Andrew H
Hi,

Thanks for the feedback! I'll add your suggestions to our bug tracking system.

As far as I can see, the line-level timing option is working in the command-line tool. However, the information isn't included in any of the reports it can generate except for the app6results data file.

Your description makes it sound like the profiler is finding either the wrong PDB file or the wrong source file. PDB files are matched against the running executable, so ANTS can't show results if the PDB is out of date. Source files aren't matched in this way, so if the files at the locations specified in the PDB have been changed since the PDB was created, the lines won't match up.

When the profiler is showing results for multiple processes, they should appear in the threads drop-down in the UI (threads will be grouped by process). If only ungrouped threads are showing up there, then the profiler only profiled a single process. If child processes aren't being profiled when the option is turned on, it's possible that the profiler environment variables aren't being passed through to the child processes (the important ones are COR_PROFILER, COR_ENABLE_PROFILING and ANTS_PROCESS_GROUP). This is necessary because of the way .NET profiling works.
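To make that last point concrete, here's a rough sketch of how launcher code can accidentally strip those variables - "Worker.exe" is just a made-up child process name, not anything from ANTS:

    using System.Diagnostics;

    class ChildLauncher
    {
        static void Main()
        {
            // By default a child started this way inherits the parent's environment,
            // so the variables the profiler injected (COR_ENABLE_PROFILING,
            // COR_PROFILER, ANTS_PROCESS_GROUP) pass through and the child is profiled.
            var inherited = new ProcessStartInfo("Worker.exe") { UseShellExecute = false };
            Process.Start(inherited);

            // If the launcher rebuilds the child's environment from scratch, those
            // variables are lost and the child won't show up in the results.
            var stripped = new ProcessStartInfo("Worker.exe") { UseShellExecute = false };
            stripped.EnvironmentVariables.Clear();   // wipes the COR_* and ANTS_* variables too
            stripped.EnvironmentVariables["PATH"] = @"C:\Windows\System32";
            Process.Start(stripped);
        }
    }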
We do have plans to add improved support for diagnosing large object heap issues in a future version. Our experience is that it's a very common issue.

It's quite complex too, as a fragmented heap doesn't always mean that an application will experience a problem: for many workloads the total memory usage stabilises with a large free pool. A fragmented heap is the fault of the garbage collector rather than the application, which means there's no real way to point at a piece of application code and blame it for the problem. In fact, because the garbage collector can run at any time, it's quite possible for fragmentation to happen for different reasons in different runs of the same application. (Also, the rules for when an object ends up on the large object heap are sometimes unclear: most .NET applications end up with a few very small objects there for one reason or another, mostly things associated with loading assemblies.)

The usual pattern for a fragmentation issue is that a large short-lived object is created just before a large long-lived object. This often happens in really unexpected places: for example, adding an object to a list can cause the list's backing array to be reallocated, which will cause this problem if it happens at the wrong time - the list doesn't even have to be very large. To make things more confusing, the 'short-lived' object can already have been dereferenced by the time the 'long-lived' object is created, as there's no guarantee that a garbage collection will have happened in the meantime.

What you can do with the current profiler to identify problematic objects is to set the filters to show only objects on the large object heap, and compare snapshots between an idle period before fragmentation has occurred and another idle period after causing fragmentation. New objects on the LOH are good candidates for being the cause of fragmentation: you can use the object retention graph to associate them with your code. We're hoping to add a feature in a future release that will make it possible to positively identify the objects responsible for each fragment of the heap, which should take the guesswork out of this comparison.

In many cases you can fix the issue by changing the creation order, for example by pre-allocating some objects or by setting the Capacity property of lists before adding to them. Routines that process a large object and produce a different large object as a result are good candidates for this: if space is allocated for the result at the start instead of at the end, the processing space won't cause fragmentation (the more natural way to write this is the reverse, which will always cause fragmentation). Multi-threaded applications throw this out of the window, because it might not be possible to control allocation order there - though pre-allocating a pool of results and re-using them can work in some cases.
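To make the ordering point concrete, here's a rough sketch - the type, the method names and the array sizes are all made up for illustration, not taken from any real application:

    using System.Collections.Generic;

    class ReadingsProcessor
    {
        // Risky ordering: as the list grows its backing array is reallocated, so
        // several short-lived large arrays can end up on the LOH just before the
        // long-lived result, leaving free holes in front of it.
        static double[] SummariseGrowing(ICollection<double> readings)
        {
            var working = new List<double>();
            foreach (var r in readings)
                working.Add(r);                    // may reallocate the backing array
            var result = new double[100000];       // long-lived output allocated last
            // ... fill result from working ...
            return result;
        }

        // Safer ordering: allocate the long-lived result first, and give the list
        // its full Capacity up front so it only ever allocates one backing array.
        static double[] SummarisePreallocated(ICollection<double> readings)
        {
            var result = new double[100000];                  // long-lived output first
            var working = new List<double>(readings.Count);   // single backing array
            foreach (var r in readings)
                working.Add(r);
            // ... fill result from working ...
            return result;
        }
    }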
Another possible technique is to make all objects as long-lived as possible: fragments can't form if large objects are never dereferenced. In practice this is rather unwieldy - if fragments do form between these very long-lived objects, they're guaranteed never to be reclaimed, so new objects can only be allocated when nothing else is using the large object heap.

Copying existing large objects into new objects and then throwing the old objects away can also reduce fragmentation if it's done while the application is otherwise idle. This is really cheesy and can be tricky to get right (it might just end up shuffling the fragments around), but it's interesting because a refined version of it is actually how .NET defragments the small object heaps.

A more permanent solution is to only use the large object heap for temporary objects with a well-defined and, most importantly, short lifespan. We use this technique ourselves in the memory profiler to avoid fragmentation. The basic idea is to use lots of small arrays to store data (fewer than 1000 elements each) instead of a single big array - the sketch below shows the shape of it. The problem is that this can be really hard to retrofit into an existing application; a good start might be to implement IList in this way. The memory profiler can help here: anything that shows up in the large object filter while the application is idle is a good candidate for this technique.
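Something along these lines, for example - this is only a minimal sketch of the idea (ChunkedList is a made-up name, and it isn't the implementation we actually use in the profiler); a real version would implement IList<T> and handle removal, enumeration and so on:

    using System.Collections.Generic;

    // Each chunk stays well under the large object heap threshold (for typical
    // element sizes), so this structure never leaves big holes behind when it
    // is collected.
    class ChunkedList<T>
    {
        private const int ChunkSize = 1000;                  // small arrays, as described above
        private readonly List<T[]> _chunks = new List<T[]>();
        private int _count;

        public int Count { get { return _count; } }

        public void Add(T item)
        {
            if (_count == _chunks.Count * ChunkSize)
                _chunks.Add(new T[ChunkSize]);               // another small array, never a huge one
            _chunks[_count / ChunkSize][_count % ChunkSize] = item;
            _count++;
        }

        public T this[int index]
        {
            get { return _chunks[index / ChunkSize][index % ChunkSize]; }
            set { _chunks[index / ChunkSize][index % ChunkSize] = value; }
        }
    }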
I think that Stack Overflow article could be a bit misleading. It doesn't make sense to set local variables to null: in an optimised build the JIT stops treating a local as a live root after its last use, which has the same effect. It won't do so in debug builds, where the value is kept around for longer so it can be inspected in a debugger, but this almost never makes a difference in practice.

With fields, it does make sense to set a reference to null at the point where a longer-lived object has finished with one that's supposed to be shorter-lived. If you don't, both objects will stay in memory for the same length of time. This doesn't really produce a memory leak as such, but it can lead to higher overall memory usage. If two objects are part of the same structure - that is, they're supposed to be destroyed together - then setting their references to null doesn't make sense: the garbage collector will get them both at the same time. For this reason, most of the time it doesn't make sense to set references to null in a Dispose method (though it does make sense to set a reference to null after Dispose has been called on it, as you're not supposed to use the object again after that point).

As for Private Bytes: note that this includes free space that the CLR has reserved for itself and hasn't yet returned to the operating system, so it's possible a fair proportion of it is unused. The amount of free space is tuned by the CLR to try to maximise performance: it will keep a lot of free memory around if it thinks your program is about to allocate a lot of objects, for example. It can also have difficulty returning memory if the large object heap has become fragmented (which depends at least partially on the order of garbage collections and allocations). In most cases it isn't harmful for .NET to hold a lot of free memory: if it's unused, the operating system is quite good at swapping it out to disk.

Heap fragmentation can be a problem, though: it leads to out-of-memory exceptions, usually while there's theoretically a lot of free space still available. You can see whether this is occurring by looking at the summary page: if the number of free bytes is much larger than the largest free block, then your application is almost certainly suffering from fragmentation, and if the free bytes are increasing over time you will eventually hit an out-of-memory condition because of it. Fixing these issues can be a black art: nulling out references to large objects can make things better or worse depending on what the rest of the program is doing at the time.
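As a rough illustration of the difference between the two cases - the class names here are made up:

    // The field case: a long-lived object holding a reference to something
    // that's meant to be shorter-lived.
    class ReportBuilder
    {
        private byte[] _workingBuffer;     // large and meant to be short-lived

        public void BuildReport()
        {
            _workingBuffer = new byte[512 * 1024];
            // ... use the buffer while building the report ...

            // The builder stays alive, so drop the reference once the buffer has
            // done its job; otherwise the buffer stays reachable for as long as
            // the builder does and overall memory usage creeps up.
            _workingBuffer = null;
        }
    }

    // A local, by contrast, doesn't need this treatment in a release build:
    // the JIT stops reporting it as a live root once it's no longer used.
    class LocalExample
    {
        static long Sum()
        {
            var numbers = new int[100000];
            long total = 0;
            for (int i = 0; i < numbers.Length; i++)
                total += numbers[i];
            // numbers = null;   // pointless in an optimised build
            return total;
        }
    }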