Comments
I've been looking into this, and it appears to be an issue caused by a bug in the .NET profiling interface. Some of the int32 arrays being allocated by BuggyBits are not being reported by .NET as objects to the profiler, although the objects that reference them are (and are apparently still valid as objects and not candidates for garbage collection). I suspect there's a configuration element to this issue as well, which is why it does not show up on everyone's system.
You can see this yourself by looking at the class list and comparing the count of System.Int32[] objects to News_aspx objects: there should be at least as many arrays as pages, but when this bug occurs there are typically around 100 'missing' arrays.
I'm looking into ways to work around this; unfortunately, it's not easy to recover the missing results, which means that they will likely appear as free space.
One workaround I have found is disabling the server garbage collector, as described here: http://support.microsoft.com/kb/911716 - I'm not sure whether the bug is directly connected to server GC. There may still be circumstances where this can happen with the workstation GC, and it's not an ideal fix as it changes the garbage collector's behaviour.
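In case it's useful, the setting involved is the gcServer runtime element - for an IIS-hosted application this goes in Aspnet.config rather than web.config, if I remember the KB article correctly. A minimal sketch of the configuration:

```xml
<configuration>
  <runtime>
    <!-- Switches the process from the server GC to the workstation GC;
         remove this element (or set enabled="true") to restore server GC -->
    <gcServer enabled="false"/>
  </runtime>
</configuration>
```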
This bug can also manifest as a CouldNotMapFileException, with an IOException code of 0x8 - the same workaround should work.
The answer is 'it depends', but in general the differences will be minor enough not to be significant.
It's possible a debug build will use extra debugging code which will allocate more objects, but usually this will be an insignificant number of extra objects.
It's also possible that optimisations in the code will mean that some objects can be garbage collected sooner in release builds, but once again in most cases the effect will be insignificant. In debug builds, local variables are kept alive for as long as a method is running so they can be inspected by the debugger, but in release builds they may become eligible for collection earlier.
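Here's a minimal sketch of that lifetime difference (the class and method names are made up for illustration):

```csharp
using System;

class Example
{
    static void Main()
    {
        // A large object whose only reference is this local variable.
        var data = new byte[1024 * 1024];
        Console.WriteLine(data.Length);

        // 'data' is never read again. In a release build the JIT can treat
        // it as dead from this point on, so the collection triggered below
        // may reclaim the array. In a debug build the local is reported as
        // live until the method returns, so the array survives.
        DoMoreWork();

        // Uncommenting this keeps the array alive in release builds too:
        // GC.KeepAlive(data);
    }

    static void DoMoreWork()
    {
        GC.Collect();
    }
}
```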
I'm only able to reproduce this error for the first COM+ application I profile: subsequent sessions seem to work without an issue until I reboot my XP machine. The issue appears not to be present at all for me under Vista. Does restarting the application immediately after the failure fix the problem, or does it persist?
The problem appears to be due to a module load finished event arriving without a matching module load start event - I suspect that this could be due to a race condition in the CLR, but having to reboot my machine every time I want to try to reproduce this issue makes a concrete diagnosis difficult.
That's not unexpected, I think: the profiling should only start when the .NET runtime is loaded by IIS, which doesn't happen until you navigate to the first .aspx page in this case.
There's actually no memory being used by .NET at all until this happens, which is why the take snapshot button remains disabled.
The 'file has been modified' warning is shown if the file's modification time differs from what it was when profiling was carried out. This is usually a reliable indicator, but it can also be triggered when results are moved between two machines that each have their own copy of the same source code. The warning is flagged up because the profiler doesn't save source code along with the results, so any modifications will make the line-level timings meaningless.
Line-level timings not lining up with the source code can be caused by a couple of problems: the most common is that the application that was profiled was compiled from a different version of the source code.
A less common cause can be that the C# compiler records different line positions from those used by our source control for some reason. Visual Studio usually warns about inconsistent line endings in situations where this can occur. We don't currently know of any situations which can cause this, though exotic Unicode line endings or odd combinations of the ASCII \r and \n characters could potentially cause a conflict.
The occasional inability to drill down through a function can happen for a few reasons: this facility is implemented by inspecting the source code and comparing it to the generated call stacks. If the line-level data isn't matching up to the source code for any reason, it can be impossible to determine which function calls correspond to which line of code.
Interfaces, events, anonymous delegates, the yield keyword and other C# features can result in functions being called at run-time that look very different to the way they are specified in the source code, which will make this feature fail to work.
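As an example of the kind of mismatch involved, an iterator method is compiled into a hidden state machine class, so the method that actually appears on the call stack isn't the one in the source (the exact generated names vary between compiler versions):

```csharp
using System;
using System.Collections.Generic;

class Demo
{
    // In the source this looks like a single method...
    static IEnumerable<int> GetNumbers()
    {
        yield return 1;
        yield return 2;
    }

    static void Main()
    {
        foreach (var n in GetNumbers())
            Console.WriteLine(n);
        // ...but at run-time the body executes inside a compiler-generated
        // class, so call stacks show something like
        // "Demo+<GetNumbers>d__0.MoveNext()" rather than "Demo.GetNumbers()",
        // and the profiler can't always map that back to a line of source.
    }
}
```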
It took me a little while to work this one out. It's odd, but you're not seeing something that indicates a memory leak here, I think.
System.Collections.Generic.List<T> contains a static reference to an empty array, so that it can represent zero-item lists more efficiently: that's the array that you're seeing after you click the button. Note that it's actually there beforehand as well: you'll see two arrays when you take a snapshot before clicking the button.
It's not a memory leak, but rather an implementation detail of the .NET framework. You'll note that the array is always very small and there's only ever one of these.
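The pattern looks roughly like this (a sketch - the class and field names here are illustrative, not the actual framework identifiers):

```csharp
// Roughly what List<T> does internally to represent empty lists cheaply.
class MyList<T>
{
    // A single empty array shared by every zero-item MyList<T>. Because
    // it's static it stays reachable for the lifetime of the application,
    // so it appears in every snapshot - without being a leak.
    static readonly T[] s_emptyArray = new T[0];

    // New instances point at the shared array until an item is added.
    T[] _items = s_emptyArray;
}
```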
The pattern [GC Handle]->System.Object[]->MyInfoClass[] usually indicates a static variable: this is how they're implemented internally by the .NET framework. The profiler does try to identify which variable contains a particular object provided you're using .NET 2 or later, but there are some CLR limitations that prevent it from working reliably in all cases. Generic classes such as List<T> are one such case, unfortunately, which is why this gets presented in this way.
No: private bytes includes free space on the .NET heaps as well as unmanaged memory usage. You can find out which is responsible by looking at the breakdown at the bottom of the snapshot summary page: the free space on all .NET heaps value is included in the private bytes.
If this value is large and the largest free block is small, your program is suffering from fragmentation of the large object heap. See http://www.simple-talk.com/dotnet/.net- ... ject-heap/ for a description of the problem.
If the value is small, then it's likely that your program is suffering from an unmanaged memory leak of some variety. If the unmanaged memory is being used by .NET objects then you should be able to find the problem by looking for objects whose instance count is increasing, or by looking for objects on the finaliser queue that have not been disposed. The 'Kept in memory only by GC roots of type COM+' filter may also reveal .NET objects that have been leaked through unmanaged code.
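If you want to watch the two figures side by side outside the profiler, the standard performance counters will do it - a quick sketch ("MyApp" is a placeholder for the counter instance name of your process):

```csharp
using System;
using System.Diagnostics;

class CounterCheck
{
    static void Main()
    {
        // Total private memory committed by the process.
        var privateBytes = new PerformanceCounter(
            "Process", "Private Bytes", "MyApp");
        // Memory in the managed heaps (only updated after a collection).
        var managedBytes = new PerformanceCounter(
            ".NET CLR Memory", "# Bytes in all Heaps", "MyApp");

        // A large, persistent gap between the two points at unmanaged usage
        // or free space held on the .NET heaps, as described above.
        Console.WriteLine("Private bytes:      {0:N0}", privateBytes.NextValue());
        Console.WriteLine("Bytes in all heaps: {0:N0}", managedBytes.NextValue());
    }
}
```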
The profiler performs a single full garbage collection while taking a snapshot. It's invoked via a different .NET API but shouldn't have an effect that's any different from GC.Collect().
There are a few things that can cause .NET memory usage to appear not to change after a garbage collection:
* The '# Bytes in all Heaps' counter is often not accurate as it's only updated after a garbage collection, and there are times when .NET allocates more memory without causing a GC (and rare times when it releases memory outside of a GC). The 'Private Bytes' counter is updated in real time.
* Fragmentation of the large object heap can prevent .NET from releasing memory back to the system. You can see free space in .NET heaps in the memory profiler, as well as the largest free contiguous block: if there is a lot of free space but the largest contiguous block is small then fragmentation is occurring.
* Objects referenced by objects with finalizers require at least two garbage collection cycles to be removed from memory, and possibly more if the referenced objects themselves have finalizers (see the sketch after this list).
* .NET can maintain a pool of free memory for future object allocations if it thinks that there will be a lot of them (this allows it to postpone future garbage collections). It might decide to release this back to the system if there is another GC and not much of the free memory is used. This is often beneficial for the performance of server-style applications.
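To illustrate the finalizer point, here's a small sketch (class names made up) showing why a single collection isn't enough:

```csharp
using System;

class HasFinalizer
{
    // Anything this field references stays alive until the finalizer runs.
    private byte[] _buffer = new byte[1024 * 1024];

    ~HasFinalizer()
    {
        // cleanup would go here
    }
}

class Program
{
    static void Main()
    {
        var obj = new HasFinalizer();
        Console.WriteLine(obj != null);
        obj = null;                     // drop the only strong reference

        GC.Collect();                   // 1st collection: the object is found
                                        // dead but is queued for finalization,
                                        // so it (and its buffer) survive
        GC.WaitForPendingFinalizers();  // let the finalizer thread run
        GC.Collect();                   // 2nd collection: memory actually freed
    }
}
```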
We do only show the shortest paths in the object reference graph at the moment: we're planning to look at new ways of exploring the graph in future versions. However, the roots that we do show are only the strong roots for an object: weak roots (and hence weak references) are deliberately excluded from the graph, as there are usually too many of these in WinForms or WPF applications for the graph to make any sense.
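A quick sketch of why weak roots are excluded - an object reachable only through a weak reference is already eligible for collection, so including it would clutter the graph with objects that aren't really being kept alive:

```csharp
using System;

class WeakDemo
{
    static void Main()
    {
        var target = new object();
        var weak = new WeakReference(target);

        target = null;   // drop the only strong reference
        GC.Collect();

        // In a release build this typically prints False: the weak
        // reference alone does not root the object. (In a debug build the
        // 'target' local may keep it alive until the method returns - the
        // same debug/release lifetime difference mentioned earlier in
        // this thread.)
        Console.WriteLine(weak.IsAlive);
    }
}
```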
You can use the class reference graph to explore the full relationships between objects, and switch to the object list or graph when you find something interesting there. The filters can be useful for narrowing down cases where there are a lot of similar objects. For a UI library like WPF you typically find that everything eventually references everything else in some way - it's for this reason that we prefer to use the shortest path, as when there's a loop of references, the objects nearer to a root are typically nearer to the 'start' of the loop as seen from the point of view of the program.
The object graph highlights any objects that make a loop in a blue box - any objects so highlighted have a path linking them in both directions - typically through parent/child fields but sometimes via more complicated paths.
You will need to run the profiler on the machine running the application you wish to profile. In general it is better to try to reproduce the issue you are seeing in a test environment rather than a live environment (though if it only shows up under real life loads, the live environment might be the only option). ANTS has to restart the app pool and will slow down its operation while profiling, although the v5 memory profiler is designed to be very low impact compared to its predecessor.
ANTS determines whether running code is yours by looking at the program debug data (.pdb files). For websites, you will also have to add the debug option to the web.config file if you want this feature; Visual Studio is not required for this. Even if you don't have the PDB files you can still profile method-level timings for all methods: after you have the results you will need to set ANTS to show all methods instead of only methods with source (the default).
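For a website, the debug option mentioned above is the standard compilation element in web.config:

```xml
<configuration>
  <system.web>
    <!-- Generates debug information (.pdb data) when the site is compiled,
         which is what lets the profiler tell your code from everything else -->
    <compilation debug="true"/>
  </system.web>
</configuration>
```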