Thoughts from the EPS Windows Server Performance Team
On Tuesday, we talked about the heap - what it is and how it works. Today we're going to continue our look at the heap. At the end of the last post we mentioned look-aside lists and the Low Fragmentation Heap. But before we dive into those, let's take a look at heap synchronization.
The heap manager supports concurrent access from multiple threads by default. A process can also lock the entire heap (via the HeapLock and HeapUnlock functions) and prevent other threads from performing heap operations. This is required for operations that need a consistent state across multiple heap calls. To use an analogy - think about trying to perform some data analysis while someone keeps changing the data in the Excel spreadsheet you are using. Your analysis winds up being skewed because the data keeps changing. If you lock the spreadsheet for your use only, you can complete all your analysis using consistent data.
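To make the "lock the whole heap" idea concrete, here is a toy Python sketch (an illustration only, not the Win32 API): a single lock guards each individual operation, and a walk of the heap holds that same lock across several reads so the totals come from one consistent snapshot.

```python
import threading

class ToyHeap:
    """Toy stand-in for a process heap: a dict of block-id -> size."""
    def __init__(self):
        self.blocks = {}
        self._lock = threading.Lock()   # plays the role of the heap's global lock

    def alloc(self, block_id, size):
        with self._lock:                # ordinary per-operation synchronization
            self.blocks[block_id] = size

    def free(self, block_id):
        with self._lock:
            self.blocks.pop(block_id, None)

    def locked_walk(self):
        """Analogous to HeapLock ... HeapUnlock: hold the lock across
        *multiple* reads so no alloc/free can interleave between them."""
        with self._lock:
            count = len(self.blocks)
            total = sum(self.blocks.values())
        return count, total

heap = ToyHeap()
heap.alloc("A", 64)
heap.alloc("B", 128)
count, total = heap.locked_walk()   # consistent snapshot: (2, 192)
```

Without the lock held across both reads, another thread could free a block between computing `count` and `total`, skewing the result - exactly the "changing spreadsheet" problem from the analogy.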
Look-aside lists are singly linked lists that support operations such as "push to the list" and "pop from the list" in last in, first out (LIFO) order. There are 128 look-aside lists per heap, which handle allocations up to 1KB on 32-bit systems and up to 2KB on 64-bit systems. Look-aside lists provide a performance improvement over normal heap allocations because multiple threads can perform allocations and deallocations concurrently without acquiring the heap's global lock. The heap manager keeps track of the number of blocks in each list. If a thread requests a block of a size for which the corresponding look-aside list has no entries, the heap manager forwards the call to the core heap manager. The heap manager creates look-aside lists automatically when a heap is created, as long as no debugging options are enabled and the heap is expandable.
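A look-aside list is essentially a per-size LIFO stack of free blocks. The following minimal Python sketch (sizes and structure simplified - the real heap manager keys 128 lists by allocation size and uses lock-free singly linked lists) shows the push/pop behavior and the fallback to the core heap:

```python
class LookAsideList:
    """Toy LIFO free list for one fixed block size."""
    def __init__(self, block_size):
        self.block_size = block_size
        self._stack = []          # last in, first out

    def push(self, block):        # a freed block goes on top of the list
        self._stack.append(block)

    def pop(self):                # an allocation reuses the most recent block
        return self._stack.pop() if self._stack else None

def allocate(lists, size):
    """Try the matching look-aside list first; fall back to the core heap
    (modeled here as a fresh bytearray) when the list is empty."""
    lst = lists.get(size)
    block = lst.pop() if lst else None
    return block if block is not None else bytearray(size)

lists = {64: LookAsideList(64)}
b = allocate(lists, 64)       # list is empty -> falls through to the "core heap"
lists[64].push(b)             # freeing pushes the block back onto the list
b2 = allocate(lists, 64)      # popped straight off the list: same block, no global lock
```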
Let's move on to heap fragmentation. Heap fragmentation occurs when available memory is broken into small, non-contiguous blocks. When this happens, an allocation can fail even though the heap contains enough total free memory to satisfy the request, because no single block is large enough. For applications with low memory usage, the standard heap is adequate and allocations will not fail on account of heap fragmentation. However, if an application frequently allocates memory using a variety of allocation sizes, those allocations may begin to fail due to heap fragmentation.
So looking at the first diagram, we can see that all of the allocated and free memory is placed together in contiguous blocks. New memory allocations are satisfied from one big pool of unallocated memory.
Over time, as the program runs, some memory will be freed and the overall heap picture will be changed as shown in the next diagram.
Here, we can see that the allocations for blocks A, C and E have all been released. However, this has introduced some fragmentation - unallocated and allocated regions are becoming intermixed. Over time this can result in performance degradation, and possibly even application failure in a worst-case scenario.
Now we see the possible consequences of heap fragmentation. Even though we have sufficient overall heap space, we do not have a contiguous block large enough to satisfy the new allocation request, as shown in the following diagram.
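This failure mode is easy to demonstrate with a toy model: picture the heap as a row of 1KB cells, `X` for allocated and `.` for free. The free space totals 6KB, but no single free run is larger than 2KB, so a 4KB request fails.

```python
def largest_free_run(heap_map):
    """heap_map: string of 'X' (allocated) / '.' (free) cells, 1 cell = 1 KB."""
    best = run = 0
    for cell in heap_map:
        run = run + 1 if cell == "." else 0
        best = max(best, run)
    return best

# A fragmented heap: free cells interleaved with allocated blocks.
heap_map = "..XX..XX..XX"            # 6 KB free in total
total_free = heap_map.count(".")     # -> 6
largest = largest_free_run(heap_map) # -> 2

# A 4 KB request fails even though 6 KB is free overall:
request = 4
can_allocate = largest >= request    # -> False
```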
Windows XP and Windows Server 2003 introduced the low-fragmentation heap (LFH), which application developers can enable for their application's heaps (via HeapSetInformation). The LFH reduces fragmentation by managing all allocated blocks in 128 predetermined block-size ranges, called buckets. Whenever an application allocates memory from the heap, the LFH chooses the bucket with the smallest block size that is large enough to contain the requested size. The smallest block that can be allocated is 8 bytes. The MSDN article on the Low Fragmentation Heap lists the size range of each bucket.
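The bucket selection can be sketched as a rounding rule. The bands below are my transcription of the granularity table in the MSDN LFH article (buckets 1-32 have 8-byte granularity up to 256 bytes, then the granularity doubles with each band up to 16KB) - check them against that table before relying on the exact values.

```python
# (first_bucket, granularity, band_max_size) - transcribed from the MSDN table.
BANDS = [
    (1,   8,   256),
    (33,  16,  512),
    (49,  32,  1024),
    (65,  64,  2048),
    (81,  128, 4096),
    (97,  256, 8192),
    (113, 512, 16384),
]

def lfh_block_size(requested):
    """Round a request up to the smallest bucket size that can hold it."""
    if not 1 <= requested <= 16384:
        return None                  # requests over 16 KB bypass the LFH
    prev_max = 0
    for _first, gran, band_max in BANDS:
        if requested <= band_max:
            # round up to the next multiple of this band's granularity
            over = requested - prev_max
            return prev_max + -(-over // gran) * gran
        prev_max = band_max
    return None

lfh_block_size(1)     # -> 8   (the smallest allocatable block)
lfh_block_size(260)   # -> 272 (second band: 256 + one 16-byte step)
```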
OK - let's take a look at heap corruption and using Pageheap.exe for troubleshooting. First, let's define what heap corruption is. Simply put, heap corruption occurs whenever misbehaving code damages data on the heap. One of the most common causes is writing beyond the bounds of a memory allocation, which can corrupt the memory directly before and/or after the allocated bytes. Heap corruption may also occur when an application writes to a block of memory that has already been freed. When this happens and the application crashes, one of the first things we do is look at the dump file.
If you remember our post on Basic Debugging of an Application Crash, you can run the !analyze -v command to have the debugger perform a quick analysis. When a crash is caused by heap corruption, you may see information like this returned:
ADDITIONAL_DEBUG_TEXT: Enable Pageheap/AutoVerifer
When we are investigating a heap corruption issue, we can use Pageheap to help find the corruption. Pageheap introduces a software verification layer (the page heap manager) between the application and the system that verifies all dynamic memory operations (allocations, deallocations, etc.). When using Pageheap, there are two methods to identify the corruption: full-page heap and normal page heap.
Full-page heap places a non-accessible page at the end of each allocation. Full-page heap is enabled for individual processes; for large processes, because of the high memory requirements, it may need to be restricted to only a subset of the process's allocations. Full-page heap cannot be enabled system-wide because of the difficulty of evaluating the correct page file size - if system-wide full-page heap were enabled and the page file were too small, the system would not boot. The advantage of using full-page heap is that a process will access violate (AV) exactly at the point of failure.
Normal page heap checks fill patterns when a block is freed. Normal page heap can be used to test large-scale processes without the high memory overhead of full-page heap. However, normal page heap delays detection until blocks are freed, which makes failures more difficult to debug.
When full page heap is enabled, guard pages are used and buffer overruns/underruns are caught instantly, since the program will access violate at the point of the overrun/underrun. These failures are easy to debug because the current stack trace points directly to the broken code. If normal page heap is used, or the corruption happens in the small fill pattern at the end of the buffer (present for alignment reasons), the corruption will be detected only when the block is freed. To make life easier in such cases, the page heap manager places a header before every allocation (full and normal). This header contains a few valuable pieces of information: the owning heap, the user-requested size and, in some cases, the stack trace for the allocation. The structure of the full and normal page heap blocks is shown below (taken from the MSDN article: The Structure of a Page Heap Block):
Normal page heap block structure
Full page heap block structure
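The fill-pattern check that normal page heap performs at free time can be sketched like this (a toy Python model - the fill byte, header fields and layout are simplified stand-ins, not the real page heap structures):

```python
FILL = 0xAB          # toy fill byte for the suffix pad (not the real pattern)

class Block:
    """Toy heap block: a recorded user size, user bytes, and a fill suffix."""
    def __init__(self, requested, pad=8):
        self.requested = requested            # "header": user-requested size
        self.data = bytearray(requested + pad)
        for i in range(requested, requested + pad):
            self.data[i] = FILL               # suffix filled with the pattern

def free_block(block):
    """Like normal page heap: verify the suffix pattern when the block is
    freed - an overrun is detected here, not at the moment it happened."""
    suffix = block.data[block.requested:]
    if any(byte != FILL for byte in suffix):
        raise RuntimeError("heap corruption detected on free")

b = Block(16)
b.data[16] = 0x00    # off-by-one write past the 16 requested bytes
# free_block(b) would now raise - the overrun is caught, but only at free time,
# which is why these failures are harder to trace back to the broken code.
```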
To enable page heap for a single process, we use the Gflags.exe utility via the command line. The usage is very simple:
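For example (notepad.exe here is just a placeholder image name - substitute your own executable):

```
:: enable normal page heap for a single image
gflags /p /enable notepad.exe

:: enable full page heap for that image instead
gflags /p /enable notepad.exe /full

:: turn page heap off again
gflags /p /disable notepad.exe
```

The setting applies the next time the image is launched; running processes are not affected.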
To control system-wide page heap, we also use the Gflags.exe utility as follows - both of the commands below require a system restart to take effect:
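The system-wide switch uses the /r option, which writes the flag to the registry:

```
:: enable page heap for all processes (takes effect after a restart)
gflags /r +hpa

:: disable it again (also requires a restart)
gflags /r -hpa
```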
To see a complete list of parameters for the Gflags.exe utility, simply run gflags.exe -? at the command prompt. Gflags also has a GUI through which the page heap options can be set; it can be accessed by running gflags.exe with no parameters.
Well, that brings us to the end of this post. Until next time ...
- CC Hameed
I am debugging a heap corruption error and googling for "Enable Pageheap/AutoVerifer" pointed me straight to this great article.
Thanks a lot for this kind of information.
Because of it, we were able to crack one of the release-blocker bugs in our product.
Keep it up.
Thanks team! Clearly explained and easily understood. Very helpful.