How can we help you today?
Andrew H

Activity overview

Latest activity by Andrew H

ANTS Memory Profiler 7 EAP Build 667 released
We're pleased to present the next Early Access Build of our forthcoming ANTS Memory Profiler 7, which you can download here: http://downloads.red-gate.com/ANTSProfiler/EAP/ANTSMemoryProfiler_7....
0 followers 0 comments 0 votes
ANTS Performance Profiler 6 beta build 1308 now available
A new beta build of ANTS Performance Profiler 6 is now available! You can download build 1308 here. This build will expire on 14th July 2010. Please post any feature requests or bug reports in this...
0 followers 0 comments 0 votes
Hi, Thanks for the feedback! I'll add your suggestions to our bug tracking system.

As far as I can see, the line-level timing option is working in the command-line tool. However, the information isn't included in any of the reports it can generate except for the app6results data file.

Your description makes it sound like the profiler is finding either the wrong PDB file or the wrong source file. PDB files are matched against the running executable, so ANTS can't show results if the PDB is out of date. Source files aren't matched in this way, so if the files at the locations recorded in the PDB have changed since it was created, the lines won't match up.

When the profiler is showing results for multiple processes, the processes should show up in the threads drop-down in the UI (threads will be grouped by process). If only threads are showing up there, the profiler only profiled a single process. If child processes aren't being profiled when the option is turned on, it's possible that the profiler environment variables aren't being passed through to the child processes (the important ones are COR_PROFILER, COR_ENABLE_PROFILING and ANTS_PROCESS_GROUP). This is necessary because of the way .NET profiling works. / comments
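As an aside, when child processes aren't picking up those variables it's usually because the code launching them replaces or clears the environment rather than letting it be inherited. Here's a rough sketch of what to check when a child is started with System.Diagnostics.Process; the child executable name is hypothetical and this isn't ANTS code, just an illustration that the variables named above need to reach the child's environment.

using System;
using System.Diagnostics;

class ChildLaunchCheck
{
    static void Main()
    {
        // Variables the profiler relies on (named in the comment above).
        string[] profilerVars = { "COR_ENABLE_PROFILING", "COR_PROFILER", "ANTS_PROCESS_GROUP" };

        // Hypothetical child executable; substitute the real one.
        ProcessStartInfo startInfo = new ProcessStartInfo("ChildWorker.exe");
        startInfo.UseShellExecute = false;   // required for EnvironmentVariables to take effect

        // ProcessStartInfo.EnvironmentVariables starts out as a copy of the parent's
        // environment, so normally nothing extra is needed. If the launching code
        // clears or rebuilds that collection, copy the profiler variables back in.
        foreach (string name in profilerVars)
        {
            string value = Environment.GetEnvironmentVariable(name);
            if (value == null)
                Console.WriteLine(name + " is not set in the parent process");
            else
                startInfo.EnvironmentVariables[name] = value;
        }

        Process.Start(startInfo);
    }
}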
0 votes
ANTS Performance Profiler 6 EAP is now available
The ANTS Performance Profiler 6 Early Access Program has begun. We've added a new profiling mode, a command line tool and a new way to visualise when a method was running. You can see a more detail...
0 followers 0 comments 0 votes
We do have plans to add improved support for diagnosing large object heap issues in a future version. Our experience is that it's a very common issue. It's quite complex too, as a fragmented heap does not always mean that an application will experience a problem: for many workloads the total memory usage can stabilize with a large free pool. A fragmented heap is the fault of the garbage collector and not really a problem with the application, which means there's no real way to point at a piece of application code and blame it for the problem. In fact, because the garbage collector can run at any time, it's quite possible for fragmentation to happen for different reasons in different runs of the same application. (Also, the rules for when an object ends up on the large object heap are sometimes unclear: most .NET applications end up with a few very small objects there for one reason or another, mostly things associated with loading assemblies.)

The usual pattern for a fragmentation issue is that a large short-lived object is created before a large long-lived object. This often happens in really unexpected places: for example, adding an object to a list can result in the list being reallocated, which will cause this problem if it happens at the wrong time, and the list doesn't even have to be very large. To make things more confusing, the 'short-lived' object can already have been dereferenced by the time the 'long-lived' object is created, as there's no guarantee that a garbage collection will have happened in the meantime.

What you can do with the current profiler to identify problematic objects is to set the filters to show only objects on the large object heap, and compare snapshots between an idle period before fragmentation has occurred and another idle period after causing fragmentation. New objects on the LOH are good candidates for being the cause of fragmentation: you can use the object retention graph to associate them with your code. We're hoping to add a feature in a future release that will make it possible to positively identify the objects responsible for each fragment of the heap, which should take the guesswork out of doing this comparison.

If you change the creation order around, for example by pre-allocating some objects or by setting the Capacity property for lists before adding to them, you'll fix the issue in many cases (there's a small sketch of this below). Routines that process a large object and produce a different large object as a result are good candidates for this: if space is allocated for the result at the start instead of at the end, the processing space won't cause fragmentation (the more natural way to write this would be the reverse, which will always cause fragmentation). Multi-threaded applications throw this out of the window, because it might not be possible to control allocation order (maybe pre-allocating a pool of results and re-using them would work in some cases).

Another possible technique is to make all objects as long-lived as possible: fragments can't form if large objects are never dereferenced. In practice this is rather unwieldy; if fragments form between these very long-lived objects, they're guaranteed never to be reclaimed, so new objects can only be allocated when nothing else is using the large object heap.

Copying existing large objects into new objects and then throwing the old objects away can also reduce fragmentation if it's done while the application is otherwise idle. This is really cheesy and can be tricky to get right (it might just end up shuffling the fragments around), but it's interesting because a refined version of it is actually how .NET defragments the small object heaps.

A more permanent solution is to only use the large object heap for temporary objects with a well-defined and, most importantly, short lifespan. We use this technique ourselves in the memory profiler to avoid fragmentation. The basic idea is to store data in lots of small arrays (fewer than 1,000 elements each) instead of a single big array; there's a sketch of this below too. The problem is that this can be really hard to retrofit into an existing application, so a good start might be to implement IList in this way. The memory profiler can help here: anything that shows up in the large object filter while the application is idle is a good candidate for this technique. / comments
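To make the Capacity point concrete, here's a minimal sketch (not code from the profiler; the class and counts are made up for illustration). An array only reaches the large object heap once it's roughly 85,000 bytes, so a List<double> that grows by repeated Add calls starts discarding large backing arrays once it holds around 10,000 elements or so, and any of those temporaries can end up as a hole in front of a longer-lived object.

using System.Collections.Generic;

class CapacityExample
{
    // Growing a list one element at a time reallocates its backing array as it
    // fills up. Once that array is large enough for the large object heap, each
    // further reallocation abandons a large temporary array; if a long-lived
    // large object is allocated in between, the abandoned array becomes a hole.
    static List<double> BuildWithoutCapacity(int count)
    {
        List<double> results = new List<double>();
        for (int i = 0; i < count; i++)
            results.Add(i);                 // backing array reallocated repeatedly
        return results;
    }

    // Setting the capacity up front allocates the final backing array once, so no
    // temporary large arrays are created while the list is filled.
    static List<double> BuildWithCapacity(int count)
    {
        List<double> results = new List<double>(count);   // single allocation
        for (int i = 0; i < count; i++)
            results.Add(i);
        return results;
    }
}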
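And a minimal sketch of the "lots of small arrays" idea as a chunked list. This isn't the implementation we use inside the memory profiler; the chunk size and class name are invented for illustration, and a fuller version would implement IList<T> as suggested above.

using System;
using System.Collections.Generic;

// A simplified chunked list: data is stored in many small arrays (chunks) so
// that no individual allocation is big enough for the large object heap.
class ChunkedList<T>
{
    private const int ChunkSize = 512;            // well below the LOH threshold
    private readonly List<T[]> chunks = new List<T[]>();
    private int count;

    public int Count { get { return count; } }

    public void Add(T item)
    {
        int chunkIndex = count / ChunkSize;
        if (chunkIndex == chunks.Count)
            chunks.Add(new T[ChunkSize]);         // small allocation, stays off the LOH
        chunks[chunkIndex][count % ChunkSize] = item;
        count++;
    }

    public T this[int index]
    {
        get
        {
            if (index < 0 || index >= count)
                throw new ArgumentOutOfRangeException("index");
            return chunks[index / ChunkSize][index % ChunkSize];
        }
        set
        {
            if (index < 0 || index >= count)
                throw new ArgumentOutOfRangeException("index");
            chunks[index / ChunkSize][index % ChunkSize] = value;
        }
    }
}

Because every allocation here is a small array, the data stays on the normal, compacted generational heaps, so long-lived structures built this way can't pin down fragments on the large object heap.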
0 votes