Comments
It took me a little while to work this one out. It's odd, but I don't think what you're seeing here indicates a memory leak.
System.Collections.Generic.List<T> contains a static reference to an empty array so that it can represent zero-item lists more efficiently: that's the array you're seeing after you click the button. Note that it's actually there beforehand as well: you'll see two arrays if you take a snapshot before clicking the button.
It's not a memory leak, but rather an implementation detail of the .NET framework. You'll note that the array is always very small and there's only ever one of these.
The pattern [GC Handle]->System.Object[]->MyInfoClass[] usually indicates a static variable: this is how they're implemented internally by the .NET framework. The profiler does try to identify which variable contains a particular object provided you're using .NET 2 or later, but there are some CLR limitations that prevent it from working reliably in all cases. Generic classes such as List<T> are one such example, unfortunately, which is why it gets presented in this way.
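For anyone curious, the shared empty array works roughly along the lines of the sketch below. This is illustrative only (the class and field names are made up, and the real List<T> source differs), but it shows why a tiny array exists for each closed generic list type even before any items are added:

    // Illustrative sketch only - not the actual List<T> source.
    public class SketchList<T>
    {
        // One shared, zero-length array per closed generic type: this is the
        // small array that shows up in the snapshot.
        private static readonly T[] EmptyArray = new T[0];

        // Every new, empty list starts out pointing at the shared array
        // instead of allocating its own storage.
        private T[] items = EmptyArray;
    }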
No: private bytes includes free space on the .NET heaps as well as unmanaged memory usage. You can find out which is responsible by looking at the breakdown at the bottom of the snapshot summary page: the free space on all .NET heaps value is included in the private bytes.
If this value is large and the largest free block is small, your program is suffering from fragmentation of the large object heap. See http://www.simple-talk.com/dotnet/.net- ... ject-heap/ for a description of the problem.
If the value is small, then it's likely that your program is suffering from an unmanaged memory leak of some variety. If the unmanaged memory is being used by .NET objects then you should be able to find the problem by looking for objects whose instance count is increasing, or by looking for objects on the finaliser queue that have not been disposed. The 'Kept in memory only by GC roots of type COM+' filter may also reveal .NET objects that have been leaked through unmanaged code.
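As a rough illustration of that last point, the sort of object these views tend to surface looks something like this (the class is made up, but the pattern - unmanaged memory owned by a finalisable wrapper that nobody disposed - is the common one):

    using System;
    using System.Runtime.InteropServices;

    // Illustrative only: a wrapper that owns unmanaged memory. If callers forget
    // to call Dispose(), the unmanaged allocation is only released when the
    // finalizer eventually runs, so undisposed instances pile up on the
    // finalizer queue while unmanaged memory usage keeps growing.
    sealed class NativeBuffer : IDisposable
    {
        private IntPtr buffer = Marshal.AllocHGlobal(64 * 1024);

        public void Dispose()
        {
            if (buffer != IntPtr.Zero)
            {
                Marshal.FreeHGlobal(buffer);
                buffer = IntPtr.Zero;
            }
            GC.SuppressFinalize(this); // disposed properly: nothing left for the finalizer
        }

        ~NativeBuffer()
        {
            // Safety net only; relying on this delays release of the memory.
            if (buffer != IntPtr.Zero) Marshal.FreeHGlobal(buffer);
        }
    }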
The profiler performs a single full garbage collection while taking a snapshot. It's invoked via a different .NET API but shouldn't have an effect that's any different from GC.Collect().
There are a few things that can cause .NET memory usage to appear not to change after a garbage collection:
* The # bytes in all heaps counter is often not accurate as it's only updated after a garbage collection, and there are times when .NET allocates more memory without causing a GC (and rare times when it releases memory outside of a GC). The private bytes counter is updated in real time.
* Fragmentation of the large object heap can prevent .NET from releasing memory back to the system. You can see free space in .NET heaps in the memory profiler, as well as the largest free contiguous block: if there is a lot of free space but the largest contiguous block is small then fragmentation is occurring.
* Objects referenced by objects with finalizers require at least two garbage collection cycles to be removed from memory, and possibly more if those references themselves have finalizers (see the sketch after this list).
* .NET can maintain a pool of free memory for future object allocations if it thinks that there will be a lot of them (this allows it to postpone future garbage collections). It might decide to release this back to the system if there is another GC and not much of the free memory is used. This is often beneficial for the performance of server-style applications.
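To illustrate the finalizer point, here's a minimal, artificial sketch (class name made up) showing why two collections are needed before a finalisable object and anything it references can go away:

    using System;

    class Finalizable
    {
        public byte[] Payload = new byte[1024 * 1024]; // anything it references is kept alive too
        ~Finalizable() { }                             // the finalizer is what forces the two-pass lifetime
    }

    class Program
    {
        static void Main()
        {
            var obj = new Finalizable();
            obj = null;

            GC.Collect();                  // first collection: the object is queued for finalization
            GC.WaitForPendingFinalizers(); // let the finalizer thread run
            GC.Collect();                  // second collection: the object and its payload can be reclaimed
        }
    }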
We do only show the shortest paths in the object reference graph at the moment: we're planning to look at new ways of exploring the graph in future versions. However, the roots that we do show are only the strong roots for an object: weak roots (and hence weak references) are deliberately excluded from the graph, as there are usually too many of these in WinForms or WPF applications for the graph to make any sense.
You can use the class reference graph to explore the full relationships between objects, and switch to the object list or graph when you find something interesting there. The filters can be useful for narrowing down cases where there are a lot of similar objects. For a UI library like WPF you typically find that everything eventually references everything else in some way - it's for this reason that we prefer to use the shortest path, as when there's a loop of references, the objects nearer to a root are typically nearer to the 'start' of the loop as seen from the point of view of the program.
The object graph highlights any objects that form a reference loop with a blue box: any objects so highlighted have a path linking them in both directions, typically through parent/child fields but sometimes via more complicated paths.
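If it helps to see the strong-vs-weak distinction in code, here's a minimal, illustrative sketch (nothing ANTS-specific about it): only the strong reference keeps its target alive, which is why weak roots aren't useful when working out why an object is still in memory.

    using System;

    class Program
    {
        static void Main()
        {
            var strong = new object();                  // strong reference: keeps its target alive
            var weak = new WeakReference(new object()); // weak reference: does not keep its target alive

            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();

            Console.WriteLine(strong != null); // True  - still rooted by the local variable
            Console.WriteLine(weak.IsAlive);   // False - the weakly referenced object was collected
        }
    }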
You will need to run the profiler on the machine running the application you wish to profile. In general it is better to try to reproduce the issue you are seeing in a test environment rather than a live environment (though if it only shows up under real life loads, the live environment might be the only option). ANTS has to restart the app pool and will slow down its operation while profiling, although the v5 memory profiler is designed to be very low impact compared to its predecessor.
ANTS determines whether running code is yours by looking at the program debug data (.pdb files). For websites, you will also have to add the debug option to the web.config file if you want this feature. Visual Studio is not required for this. Even if you don't have the PDB files you can still profile method-level timings for all methods: after you have the results you will need to set ANTS to show all methods instead of only methods with source (the default).
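For reference, the debug option in web.config is the standard ASP.NET compilation attribute; a minimal fragment along these lines will do (merge it into your existing configuration rather than replacing it):

    <configuration>
      <system.web>
        <!-- Emits debug information so the profiler can match methods to your source -->
        <compilation debug="true" />
      </system.web>
    </configuration>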
That certainly does sound like an unusually high amount of overhead.
Are you using a virtualized environment? Some VM implementations don't provide a very efficient way of reading the CPU time stamp counter which can have a negative effect on the performance of the profiler. A warning is written to the log file if this is detected.
On XP you should be able to find this in C:\Documents and Settings\<user name>\Local Settings\Application Data\Red Gate\ANTS Performance Profiler 4. The message will read something like "Profiler overhead calculated as 2435 ticks when making a method call, and 2300 ticks to the method prologue/epilogue. These values appear to be too high, so corrections made to this data may not be very accurate." (Running natively, the overhead is typically around 10-80 ticks depending on CPU type, and no message is added if it is in this range)
Also, are you using .NET 1.1? The profiler API available in that version of .NET is much less efficient than the one introduced in .NET 2.0. For many applications you can work around this by forcing them to run on the .NET 2.0 framework while profiling, using an application config file.
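As a rough illustration of that workaround, a config file placed next to the executable (e.g. MyApp.exe.config - the file name here is just an example) can ask the CLR to load the 2.0 runtime:

    <configuration>
      <startup>
        <!-- Requests the .NET 2.0 runtime, which exposes the faster profiling API -->
        <supportedRuntime version="v2.0.50727" />
      </startup>
    </configuration>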
As Stephen said, using a less detailed profiling mode will help a lot - in particular turning off line-level timing.
One other thing that might help is to turn on the 'enable inlining' option in the options dialog: this does reduce the detail that the profiler can record, but it can significantly improve the application's performance while profiling. We're changing this to be the default in version 5.
Ah, I think I see what's going on here: the profiler can't find any debugging symbols for the service so it doesn't think that any methods have source code associated with them.
You should be able to switch the Display drop-down to show all methods in order to see the missing methods: you'll see all the framework calls as well, so the results can get a little cluttered.
To enable the profiler to identify methods with source, you'll need to put the .pdb files generated when you compiled the service in the same directory as the service executable: methods with source will show up in bold when the profiler can identify them.
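If the service is built in Release mode and no .pdb files are being produced, a project setting along these lines (an illustrative MSBuild/csproj fragment for the Release configuration) will generate them without turning off optimisation:

    <PropertyGroup Condition=" '$(Configuration)' == 'Release' ">
      <!-- Produce .pdb files for Release builds so the profiler can identify methods with source -->
      <DebugSymbols>true</DebugSymbols>
      <DebugType>pdbonly</DebugType>
    </PropertyGroup>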
'Thread blocked' is the time a thread spends inactive, waiting to be woken up (e.g. because it is waiting for IO to complete, for some user interaction, or for a lock to be acquired). It's not time spent running on the CPU and is often not very interesting in terms of performance, so you can select 'CPU time' to eliminate this time from the results. You can also use the thread drop-down to focus on individual threads.
In this case, I think you're seeing one of the garbage collection threads waiting to be activated. A top-level 'thread blocked' like this indicates a sleeping .NET thread that isn't running any .NET code at all.
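For what it's worth, the kind of code that produces large 'thread blocked' times looks something like this contrived sketch: the worker spends nearly all of its wall-clock time waiting, with almost no CPU time.

    using System;
    using System.Threading;

    class Program
    {
        static readonly ManualResetEvent Signal = new ManualResetEvent(false);

        static void Main()
        {
            var worker = new Thread(() =>
            {
                Signal.WaitOne();           // blocked: waiting to be woken up, using no CPU
                Console.WriteLine("Woken"); // the only real work this thread does
            });
            worker.Start();

            Thread.Sleep(5000); // the main thread is also mostly blocked here
            Signal.Set();
            worker.Join();
        }
    }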
The intended behaviour hasn't really changed between the two versions, but ANTS 4 is very much faster than ANTS 3 and we took advantage of that to capture much more detail about the application being profiled.
The Time measurement is intended to be the time spent exclusively in a certain method, excluding any methods that it calls (framework or otherwise). However, ANTS 3 did not measure the time spent in methods without source by default, so these would be measured as part of the method that made the call: this made the Time measurement inaccurate in the older version of the profiler.
Time with children is the actual overall amount of time spent running a method. It's the best indication of which methods 'feel slow'. You may want to play with the CPU/wallclock time setting depending on what your application is doing to get the best view of the results. ANTS 3 didn't have this option and always showed wallclock time.
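As a made-up example of the difference: in the sketch below, Outer's 'Time' would cover only its own loop, while its 'Time with children' would also include everything spent inside Inner and the framework methods Inner calls.

    using System.IO;

    class Example
    {
        public static void Outer()
        {
            for (int i = 0; i < 100; i++)
            {
                Inner(i); // counts towards Outer's 'time with children', not its 'time'
            }
        }

        static void Inner(int i)
        {
            // Framework time spent inside File.WriteAllText is a child of Inner
            File.WriteAllText(Path.GetTempFileName(), "item " + i);
        }
    }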
ANTS 4 does have a similar feature: you can set it to profile only methods with source, and it will show timings in the same way as before if you choose this option. However, as there is much less detail, there will be no indication of why a particular method is slower, and ANTS 4 doesn't have line-level timing support in this mode.
We've added a much better way of exploring how functions relate to each other, though, in the form of the call graph. If you've found that the Select method is taking a lot of time, but want to relate it to your code, you can click the call graph icon next to the call and expand it upwards to find your methods: if it's called from many different places, you will see all of them drawn out as a tree, along with how much time each of your methods is spending in the Select call.