Comments
You'll need to switch the profiler to wall-clock time to see this: in CPU time, all blocking functions are reported as taking zero time. If you're using sampling mode, the profiler won't distinguish why threads became blocked, so you'll also need to use one of the other modes.
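For example, a thread blocked on a contended lock accrues wall-clock time while using almost no CPU. Here's a minimal sketch of that situation (plain Java, assuming the profiled app is JVM-based; the class, thread names and timings are just for illustration):

```java
public class LockContentionDemo {
    private static final Object LOCK = new Object();

    public static void main(String[] args) throws InterruptedException {
        // One thread holds the lock while burning CPU for ~2 seconds...
        Thread holder = new Thread(() -> {
            synchronized (LOCK) {
                busyWork(2_000);
            }
        }, "lock-holder");

        // ...while a second thread spends almost all of its wall-clock time
        // blocked waiting for the same monitor, consuming essentially no CPU.
        Thread waiter = new Thread(() -> {
            synchronized (LOCK) {
                // nothing to do; the interesting part is the wait to get here
            }
        }, "lock-waiter");

        holder.start();
        Thread.sleep(100);   // let the holder grab the lock first
        waiter.start();

        holder.join();
        waiter.join();
    }

    /** Spin for roughly the given number of milliseconds to consume CPU. */
    private static void busyWork(long millis) {
        long end = System.currentTimeMillis() + millis;
        while (System.currentTimeMillis() < end) {
            // spin
        }
    }
}
```

In CPU time the "lock-waiter" thread looks essentially idle; in wall-clock time the roughly two seconds it spends blocked on the monitor become visible.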
The best way to investigate this issue is to look at individual threads of interest using the all methods view: time spent blocked will appear as orange methods. You can then use the call graph to see which methods are responsible for each particular type of blocking.
The new green 'method event' bars shown in the timeline can be helpful too: they show exactly when blocking was occurring, and you can click one to analyse only what the program was doing during that period. The tooltip for these bars also tells you which thread a particular method was running on.
It's also possible to use the 'all threads' display to look at this, but that display is harder to interpret: the profiler adds up time spent in methods executing in parallel as if they were executing in series, so idle threads tend to appear to contribute too much time to the total.
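To make that concrete, here is a rough sketch (plain Java, hypothetical timings) of why the summed total over-weights idle threads: four threads each block for about two seconds in parallel, so only ~2 seconds of wall-clock time pass, but adding the per-thread times as if they ran in series gives ~8 seconds.

```java
import java.util.ArrayList;
import java.util.List;

public class ParallelIdleDemo {
    public static void main(String[] args) throws InterruptedException {
        int threadCount = 4;
        long perThreadBlockMillis = 2_000;    // each thread is blocked (idle) for ~2s

        List<Thread> workers = new ArrayList<>();
        long start = System.currentTimeMillis();
        for (int i = 0; i < threadCount; i++) {
            Thread t = new Thread(() -> {
                try {
                    Thread.sleep(perThreadBlockMillis);   // blocked, not using CPU
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }, "idle-worker-" + i);
            workers.add(t);
            t.start();
        }
        for (Thread t : workers) {
            t.join();
        }

        long elapsed = System.currentTimeMillis() - start;
        long summedAsIfSerial = threadCount * perThreadBlockMillis;
        System.out.printf("Elapsed wall-clock time: ~%d ms%n", elapsed);
        System.out.printf("Per-thread times summed as if serial: ~%d ms%n", summedAsIfSerial);
    }
}
```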
Where specifically can I see this information in the profiler user interface? Apologies if I missed something obvious in the documentation, but I've been looking for a bit now and it hasn't jumped out at me yet. It's very useful information for me since the app I'm profiling spends a significant amount of time in network I/O and lock contention.