In my first Pushing the Limits of Windows post, I discussed physical memory limits, including the limits imposed by licensing, implementation, and driver compatibility. Here’s the index of the entire Pushing the Limits series. While they can stand on their own, they assume that you read them in order.
Pushing the Limits of Windows: Physical Memory
Pushing the Limits of Windows: Virtual Memory
Pushing the Limits of Windows: Paged and Nonpaged Pool
Pushing the Limits of Windows: Processes and Threads
Pushing the Limits of Windows: Handles
Pushing the Limits of Windows: USER and GDI Objects – Part 1
Pushing the Limits of Windows: USER and GDI Objects – Part 2
This time I’m turning my attention to another fundamental resource, virtual memory. Virtual memory separates a program’s view of memory from the system’s physical memory, so an operating system decides when and if to store the program’s code and data in physical memory and when to store it in a file. The major advantage of virtual memory is that it allows more processes to execute concurrently than might otherwise fit in physical memory.
While virtual memory has limits that are related to physical memory limits, its limits derive from different sources and differ depending on the consumer. For example, there are virtual memory limits that apply to individual processes that run applications, others that apply to the operating system, and others that apply to the system as a whole. It's important to remember as you read this that virtual memory, as the name implies, has no direct connection with physical memory. Windows assigning the file cache a certain amount of virtual memory does not dictate how much file data it actually caches in physical memory; it can be any amount from none to more than the amount that's addressable via virtual memory.
Each process has its own virtual memory, called an address space, into which it maps the code that it executes and the data that the code references and manipulates. A 32-bit process uses 32-bit virtual memory address pointers, which creates an absolute upper limit of 4GB (2^32) for the amount of virtual memory that a 32-bit process can address. However, so that the operating system can reference its own code and data and the code and data of the currently-executing process without changing address spaces, the operating system makes its virtual memory visible in the address space of every process. By default, 32-bit versions of Windows split the process address space evenly between the system and the active process, creating a limit of 2GB for each:
Applications might use Heap APIs, the .NET garbage collector, or the C runtime malloc library to allocate virtual memory, but under the hood all of these rely on the VirtualAlloc API. When an application runs out of address space, VirtualAlloc, and therefore the memory managers layered on top of it, returns errors (represented by a NULL address). The Testlimit utility, which I wrote for the 4th Edition of Windows Internals to demonstrate various Windows limits, calls VirtualAlloc repeatedly until it gets an error when you specify the -r switch. Thus, when you run the 32-bit version of Testlimit on 32-bit Windows, it will consume the entire 2GB of its address space:
2010 MB isn’t quite 2GB, but Testlimit’s other code and data, including its executable and system DLLs, account for the difference. You can see the total amount of address space it’s consumed by looking at its Virtual Size in Process Explorer:
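Under the hood, the loop that produces that number is simple. Here's a minimal sketch of the approach (an illustration, not Testlimit's actual source) that reserves address space in 64KB chunks - the VirtualAlloc allocation granularity - until the call fails:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SIZE_T total = 0;

        // Reserve (but don't commit) 64KB chunks until VirtualAlloc
        // returns NULL, meaning the address space is exhausted.
        while (VirtualAlloc(NULL, 64 * 1024, MEM_RESERVE, PAGE_NOACCESS) != NULL)
            total += 64 * 1024;

        printf("Reserved %zu MB before running out of address space\n",
               total / (1024 * 1024));
        return 0;
    }

Because the memory is only reserved, never committed, the loop consumes address space without consuming a corresponding amount of physical memory or paging file space.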
Some applications, like SQL Server and Active Directory, manage large data structures and perform better the more that they can load into their address space at the same time. Windows NT 4 SP3 therefore introduced a boot option, /3GB, that gives a process 3GB of its 4GB address space by reducing the size of the system address space to 1GB, and Windows XP and Windows Server 2003 introduced the /userva option that moves the split anywhere between 2GB and 3GB:
To take advantage of the address space above the 2GB line, however, a process must have the ‘large address space aware’ flag set in its executable image. Access to the additional virtual memory is opt-in because some applications have assumed that they’d be given at most 2GB of the address space. Since the high bit of a pointer referencing an address below 2GB is always zero, they would use the high bit in their pointers as a flag for their own data, clearing it of course before referencing the data. If they ran with a 3GB address space they would inadvertently truncate pointers that have values greater than 2GB, causing program errors including possible data corruption.
All Microsoft server products and data intensive executables in Windows are marked with the large address space awareness flag, including Chkdsk.exe, Lsass.exe (which hosts Active Directory services on a domain controller), Smss.exe (the session manager), and Esentutl.exe (the Active Directory Jet database repair tool). You can see whether an image has the flag with the Dumpbin utility, which comes with Visual Studio:
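For example, "dumpbin /headers testlimit.exe" prints "Application can handle large (>2GB) addresses" among the file header values for a large-address-aware image. You can also check the flag programmatically; here's a minimal sketch (my own illustration, not a Windows tool) that tests its own executable's PE header:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        // GetModuleHandle(NULL) returns the base address of the process's
        // executable image, which begins with the DOS and NT headers.
        BYTE *base = (BYTE *)GetModuleHandle(NULL);
        IMAGE_DOS_HEADER *dos = (IMAGE_DOS_HEADER *)base;
        IMAGE_NT_HEADERS *nt  = (IMAGE_NT_HEADERS *)(base + dos->e_lfanew);

        if (nt->FileHeader.Characteristics & IMAGE_FILE_LARGE_ADDRESS_AWARE)
            printf("Image is large address aware\n");
        else
            printf("Image is not large address aware\n");
        return 0;
    }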
Testlimit is also marked large-address aware, so if you run it with the -r switch when booted with 3GB of user address space, you'll see something like this:
Because the address space on 64-bit Windows is much larger than 4GB, something I’ll describe shortly, Windows can give 32-bit processes the maximum 4GB that they can address and use the rest for the operating system’s virtual memory. If you run Testlimit on 64-bit Windows, you’ll see it consume the entire 32-bit addressable address space:
64-bit processes use 64-bit pointers, so their theoretical maximum address space is 16 exabytes (2^64). However, Windows doesn’t divide the address space evenly between the active process and the system, but instead defines a region in the address space for the process and others for various system memory resources, like system page table entries (PTEs), the file cache, and paged and non-paged pools.
The size of the process address space differs between the IA64 and x64 versions of Windows, where the sizes were chosen by balancing what applications need against the memory costs of the overhead (page table pages and translation lookaside buffer - TLB - entries) needed to support the address space. On x64, that's 8192GB (8TB) and on IA64 it's 7168GB (7TB - the 1TB difference from x64 comes from the fact that the top-level page directory on IA64 reserves slots for Wow64 mappings). On both the IA64 and x64 versions of Windows, the size of each of the various resource address space regions is 128GB (e.g. non-paged pool is assigned 128GB of the address space), with the exception of the file cache, which is assigned 1TB. The address space of a 64-bit process therefore looks something like this:
The figure isn’t drawn to scale, because even 8TB, much less 128GB, would be a small sliver. Suffice it to say that like our universe, there’s a lot of emptiness in the address space of a 64-bit process.
When you run the 64-bit version of Testlimit (Testlimit64) on 64-bit Windows with the –r switch, you’ll see it consume 8TB, which is the size of the part of the address space it can manage:
Testlimit’s –r switch has it reserve virtual memory, but not actually commit it. Reserved virtual memory can’t actually store data or code, but applications sometimes use a reservation to create a large block of virtual memory and then commit it as needed to ensure that the committed memory is contiguous in the address space. When a process commits a region of virtual memory, the operating system guarantees that it can maintain all the data the process stores in the memory either in physical memory or on disk. That means that a process can run up against another limit: the commit limit.
As you’d expect from the description of the commit guarantee, the commit limit is the sum of physical memory and the sizes of the paging files. In reality, not quite all of physical memory counts toward the commit limit since the operating system reserves part of physical memory for its own use. The amount of committed virtual memory for all the active processes, called the current commit charge, cannot exceed the system commit limit. When the commit limit is reached, virtual allocations that commit memory fail. That means that even a standard 32-bit process may get virtual memory allocation failures before it hits the 2GB address space limit.
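You can also query both values programmatically. Here's a minimal sketch using the GlobalMemoryStatusEx API; despite its name, the ullTotalPageFile field reports the system commit limit (RAM plus paging files), not the paging file size alone:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        MEMORYSTATUSEX ms = { sizeof(ms) };
        GlobalMemoryStatusEx(&ms);

        // ullTotalPageFile is the commit limit and ullAvailPageFile is the
        // commit still available, so their difference is the commit charge.
        printf("Commit limit:  %llu MB\n",
               ms.ullTotalPageFile / (1024 * 1024));
        printf("Commit charge: %llu MB\n",
               (ms.ullTotalPageFile - ms.ullAvailPageFile) / (1024 * 1024));
        return 0;
    }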
The current commit charge and commit limit are tracked by Process Explorer in its System Information window, in the Commit Charge section and in the Commit History bar chart and graph:
Task Manager prior to Vista and Windows Server 2008 shows the current commit charge and limit similarly, but calls the current commit charge "PF Usage" in its graph:
On Vista and Server 2008, Task Manager doesn't show the commit charge graph and labels the current commit charge and limit values with "Page File" (despite the fact that they will be non-zero values even if you have no paging file):
You can stress the commit limit by running Testlimit with the -m switch, which directs it to allocate committed memory. The 32-bit version of Testlimit may or may not hit its address space limit before hitting the commit limit, depending on the size of physical memory, the size of the paging files and the current commit charge when you run it. If you're running 32-bit Windows and want to see how the system behaves when you hit the commit limit, simply run multiple instances of Testlimit until one hits the commit limit before exhausting its address space.
Note that, by default, the paging file is configured to grow, which means that the commit limit will grow when the commit charge nears it. And even when the paging file hits its maximum size, Windows holds back some memory, and its internal tuning, as well as that of applications that cache data, might free up more. Testlimit anticipates this: when it reaches the commit limit, it sleeps for a few seconds and then tries to allocate more memory, repeating this indefinitely until you terminate it.
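In outline, the committing loop looks something like this sketch (again an illustration, not Testlimit's actual source). Note the MEM_COMMIT flag, which is what charges the allocation against the commit limit:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SIZE_T chunk = 1024 * 1024;   // commit in 1MB chunks
        SIZE_T total = 0;

        for (;;) {
            if (VirtualAlloc(NULL, chunk, MEM_COMMIT | MEM_RESERVE,
                             PAGE_READWRITE) != NULL) {
                total += chunk;
            } else {
                // Commit limit (or address space) reached; sleep and retry,
                // since the paging file may grow or other processes may
                // release memory, raising the effective limit.
                printf("Commit failed at %zu MB, retrying...\n",
                       total / (1024 * 1024));
                Sleep(3000);
            }
        }
    }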
If you run the 64-bit version of Testlimit, it will almost certainly hit the commit limit before exhausting its address space, unless physical memory and the paging files sum to more than 8TB, which as described previously is the size of the 64-bit application-accessible address space. Here's the partial output of the 64-bit Testlimit running on my 8GB system (I specified an allocation size of 100MB to make it leak more quickly):
And here's the commit history graph with steps when Testlimit paused to allow the paging file to grow:
When system virtual memory runs low, applications may fail and you might get strange error messages when attempting routine operations. In most cases, though, Windows will be able to present you with the low-memory resolution dialog, like it did for me when I ran this test:
After you exit Testlimit, the commit limit will likely drop again when the memory manager truncates the tail of the paging file that it created to accommodate Testlimit's extreme commit requests. Here, Process Explorer shows that the current limit is well below the peak that was achieved when Testlimit was running:
Because the commit limit is a global resource whose consumption can lead to poor performance, application failures and even system failure, a natural question is 'how much are processes contributing to the commit charge?' To answer that question accurately, you need to understand the different types of virtual memory that an application can allocate.
Not all the virtual memory that a process allocates counts toward the commit limit. As you've seen, reserved virtual memory doesn't. Virtual memory that represents a file on disk, called a file mapping view, also doesn't count toward the limit unless the application asks for copy-on-write semantics, because Windows can discard any data associated with the view from physical memory and then retrieve it from the file. The virtual memory in Testlimit's address space where its executable and system DLL images are mapped therefore doesn't count toward the commit limit. There are two types of process virtual memory that do count toward the commit limit: private and pagefile-backed.
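To illustrate the file mapping case just described, here's a sketch (the file name is a placeholder) that maps a read-only view of a file. The view consumes address space, but because Windows can always discard the pages and re-read them from the file, it adds nothing to the commit charge; asking for copy-on-write semantics (FILE_MAP_COPY) would change that:

    #include <windows.h>

    int main(void)
    {
        HANDLE file = CreateFileW(L"somefile.dat", GENERIC_READ,
                                  FILE_SHARE_READ, NULL, OPEN_EXISTING,
                                  FILE_ATTRIBUTE_NORMAL, NULL);
        if (file == INVALID_HANDLE_VALUE) return 1;

        HANDLE mapping = CreateFileMappingW(file, NULL, PAGE_READONLY, 0, 0, NULL);

        // A read-only view: it occupies address space, but the file itself
        // backs the pages, so nothing is charged against the commit limit.
        void *view = MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);

        UnmapViewOfFile(view);
        CloseHandle(mapping);
        CloseHandle(file);
        return 0;
    }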
Private virtual memory is the kind that underlies the garbage collector heap, native heap and language allocators. It's called private because by definition it can't be shared between processes. For that reason, it's easy to attribute to a process, and Windows tracks its usage with the Private Bytes performance counter. Process Explorer displays a process's private bytes usage in its Private Bytes column, in the Virtual Memory section of the Performance page of the process properties dialog, and in graphical form on the Performance Graph page of the process properties dialog. Here's what Testlimit64 looked like when it hit the commit limit:
Pagefile-backed virtual memory is harder to attribute, because it can be shared between processes. In fact, there's no process-specific counter you can look at to see how much a process has allocated or is referencing. When you run Testlimit with the -s switch, it allocates pagefile-backed virtual memory until it hits the commit limit, but even after consuming over 29GB of commit, the virtual memory statistics for the process don't provide any indication that it's the one responsible:
For that reason, I added the -l switch to Handle a while ago. A process must open a pagefile-backed virtual memory object, called a section, for it to create a mapping of pagefile-backed virtual memory in its address space. While Windows preserves existing virtual memory even if an application closes the handle to the section that it was made from, most applications keep the handle open. The -l switch prints the size of the allocation for pagefile-backed sections that processes have open. Here's partial output for the handles open by Testlimit after it has run with the -s switch:
You can see that Testlimit is allocating pagefile-backed memory in 1MB blocks and if you summed the size of all the sections it had opened, you'd see that it was at least one of the processes contributing large amounts to the commit charge.
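For reference, a pagefile-backed section is what you get when you pass INVALID_HANDLE_VALUE to CreateFileMapping instead of a handle to a file on disk. Here's a sketch of the kind of 1MB allocation that Testlimit's -s switch performs:

    #include <windows.h>

    int main(void)
    {
        // Passing INVALID_HANDLE_VALUE creates a section backed by the
        // paging file rather than by a named file; its 1MB size is charged
        // against the commit limit when the section is created.
        HANDLE section = CreateFileMappingW(INVALID_HANDLE_VALUE, NULL,
                                            PAGE_READWRITE, 0,
                                            1024 * 1024, NULL);
        if (section == NULL) return 1;

        // Mapping a view makes the memory usable in this address space;
        // other processes could map views of the same section.
        void *view = MapViewOfFile(section, FILE_MAP_WRITE, 0, 0, 0);

        UnmapViewOfFile(view);
        CloseHandle(section);
        return 0;
    }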
Perhaps one of the most commonly asked questions related to virtual memory is, how big should I make the paging file? There’s no end of ridiculous advice out on the web and in the newsstand magazines that cover Windows, and even Microsoft has published misleading recommendations. Almost all the suggestions are based on multiplying RAM size by some factor, with common values being 1.2, 1.5 and 2. Now that you understand the role that the paging file plays in defining a system’s commit limit and how processes contribute to the commit charge, you’re well positioned to see how useless such formulas truly are.
Since the commit limit sets an upper bound on how much private and pagefile-backed virtual memory can be allocated concurrently by running processes, the only way to reasonably size the paging file is to know the maximum total commit charge for the programs you'd like to have running at the same time. If the commit limit is smaller than that number, your programs won't be able to allocate the virtual memory they want and will fail to run properly.
So how do you know how much commit charge your workloads require? You might have noticed in the screenshots that Windows tracks that number and Process Explorer shows it: Peak Commit Charge. To optimally size your paging file you should start all the applications you run at the same time, load typical data sets, and then note the commit charge peak (or look at this value after a period of time when you know maximum load was attained). Set the paging file minimum to be that value minus the amount of RAM in your system (if the value is negative, pick a minimum size to permit the kind of crash dump you are configured for). If you want to have some breathing room for potentially large commit demands, set the maximum to double that number.
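To make the arithmetic concrete with hypothetical numbers: if the peak commit charge you observe under full load is 10GB on a system with 8GB of RAM, this rule gives a paging file minimum of 10GB - 8GB = 2GB, and a maximum of twice that, 4GB.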
Some feel having no paging file results in better performance, but in general, having a paging file means Windows can write pages on the modified list (which represent pages that aren’t being accessed actively but have not been saved to disk) out to the paging file, thus making that memory available for more useful purposes (processes or file cache). So while there may be some workloads that perform better with no paging file, in general having one will mean more usable memory being available to the system (never mind that Windows won’t be able to write kernel crash dumps without a paging file sized large enough to hold them).
Paging file configuration is in the System properties, which you can get to by typing “sysdm.cpl” into the Run dialog, clicking on the Advanced tab, clicking on the Performance Options button, clicking on the Advanced tab (this is really advanced), and then clicking on the Change button:
You’ll notice that the default configuration is for Windows to automatically manage the page file size. When that option is set on Windows XP and Server 2003, Windows creates a single paging file whose minimum size is 1.5 times RAM if RAM is less than 1GB (or equal to RAM if it's greater than 1GB), and whose maximum size is three times RAM. On Windows Vista and Server 2008, the minimum is intended to be large enough to hold a kernel-memory crash dump and is RAM plus 300MB or 1GB, whichever is larger. The maximum is either three times the size of RAM or 4GB, whichever is larger. That explains why the peak commit on my 8GB 64-bit system that’s visible in one of the screenshots is 32GB: 8GB of RAM plus a maximum paging file of three times RAM (24GB). I guess whoever wrote that code got their guidance from one of those magazines I mentioned!
A couple of final limits related to virtual memory are the maximum size and number of paging files supported by Windows. 32-bit Windows has a maximum paging file size of 16TB (4GB if you for some reason run in non-PAE mode), and 64-bit Windows can have paging files that are up to 16TB in size on x64 and 32TB on IA64. Windows 8 ARM’s maximum paging file size is 4GB. For all versions, Windows supports up to 16 paging files, where each must be on a separate volume.
Maximum paging file size:

                          Limit on x86   Limit on x86   Limit on   Limit on   Limit on
                          w/o PAE        w/PAE          ARM        x64        IA64
Windows Server 2008 R2    n/a            n/a            n/a        16TB       32TB
Windows Server 2012       n/a            n/a            n/a        16TB       n/a
Any ideas on this issue...
I noticed that once I opened a number of applications on my Windows XP SP2 (32-bit) PC -- latest updates from MS, including IE7 -- explorer began to choke... I could not open any more windows, and the menus and options would not render anymore, even though there was plenty of free memory available -- Windows Task Manager reported that I was still using less than 50% of the RAM / virtual memory that was available. After some additional investigation, I found that this sort of issue occurs on a variety of machines. The first system I tried had 2GB of RAM and a 1GB pagefile. Internet Explorer / File Explorer / "filling in" of prompts will STOP working when your total RAM usage is at or around 1.1GB.
HP 7800 Dual Core PC // 2GB RAM / 1GB PF (winXP (32bit) SP2 / IE7)
- Issue Occurs at about 1.1 GB Commit
The main problem on machines with 2GB of system RAM is that explorer stops working at or around 1GB of RAM used, topping out at 1.1GB. You can use programs up and past the 1GB mark, but once you open a few explorer windows you can open no more. Once this issue occurs, even some completely unrelated programs stop working. The strangest one occurs if you open a cmd window and type ipconfig --> ipconfig fails and returns no output... unless you close some IE windows first.
Additional tests administered (2GB RAM / 1GB PF // HP 7700):
A) Opened Photoshop, Dreamweaver, plus a few other programs to get memory usage to 1GB. Then opened an Internet Explorer instance, and C:\Program Files\ through file explorer. These open fine; now try opening another 1 or 2+ Internet Explorer windows. It won't let you open any more windows after 2 or 3. Photoshop also will not fill in forms or open new images - it does not produce an error, it just doesn't do anything. And you'll probably top out at 1.1GB of memory used.
B) Opened as many Internet Explorer windows as I could. When it hit the 1.1GB mark (commit), I could open no more.
C) Opened some programs and 33 Firefox windows; after opening C:\Program Files\ and C:\Windows\, trying to open another C:\ window gives me a partially filled window (i.e. menus and other objects not rendering). So this is not an Internet Explorer issue.
I have also tested scenarios B and C on the following machines/configurations:
Lenovo P4 Single Core PC // 1GB RAM / 1GB PF (winXP (32bit) SP2 / IE6 and IE7)
- Issue Occurs at about 800 MB Commit
HP 7700 Dual Core PC // 4GB RAM / 1GB PF (winXP (32bit) SP2 / IE7)
- Issue Occurs at about 1.4 GB Commit
Generic (Intel Motherboard / Kingston RAM) INTEL DUAL CORE PC // 8GB RAM / 2GB PF (Win2003 Enterprise R2 (32bit) SP2 / IE7)
- Issue Occurs at about 2.7 GB Commit
Steps already tried:
Pagefile: Various sizes, including no pagefile at all
Boot.ini: adding the /3GB option (this did not change my results at all; there is still a barrier)
If anyone can shed some light on this issue or has a solution, it would be much appreciated!
@Michael s. "I don't think he even implied that LSASS was AD specific."
I have to agree with Kerry C. here. The statement: "Lsass.exe (which hosts Active Directory services on a domain controller)" is somewhat ambiguous. It might have been better written as: "Lsass.exe (which <among other functions> hosts Active Directory services ..."
I'm definitely a novice and the statement confused me because I know I have lsass running and I do not have AD on a single user XP Pro box. It's a truly minor point in an excellent article but a legitimate one IMO.
I noticed the same problem of not being able to open more IE windows after a certain number of windows are open, or after a certain period of time, on my newer PCs. I had 2 Dells running Intel C2D with 2GB that have this problem, and I thought it was the PC itself. Then I built another 2 systems using AMD 64 X2 with Abit motherboards and 4GB of RAM, and they do the same thing. But I have 2 other older PCs running AMD Athlon XP, with 1GB on one and 768MB on the other, that do not do this. These systems are all running XP Pro SP2 (updated to SP3 lately). Still trying to figure out why this happens.
So I have just upgraded my laptop from 2GB of RAM to 4GB on a 32-bit Vista installation. I may move to 64-bit at some point, possibly as part of an upgrade to Windows 7, but my maximum memory use is closer to 3GB than 3 1/2 GB, so there's just no pressing need at the moment.
Given that I now have more memory than my computer can possibly use (after subtracting various graphics, driver, and legacy sections) given a 32-bit address space, what cost is there to me of disabling the page file entirely? Swapping in this case does not increase the total amount of memory I can use, and should not make my system more stable.
As far as I can see, it should only serve to make my system slower, by aggressively swapping out stuff I still want in memory.
Hi MR, thanks for the wonderful post as per usual. I have a production SQL Dell 1950 server with 16 GB RAM and I am upgrading it to a capacity of 32 GB RAM. It currently has the following:
Peak Bytes: 33,325 MB
Commit Limit: 37,062 MB
Commit Peak: 34,269 MB
Paging files as follows:
C:\ Min: 2,048 MB
C:\ Max: 4,096 MB
D:\ Min: 6,144 MB
D:\ Max: 10,240 MB
P:\ Min: 6,144 MB (Dell MD 3000 SAN)
P:\ Max: 10,240 MB (Dell MD 3000 SAN)
Paging file current allocation: 14,336 MB
I would like to know how I can tweak my paging files to make sure I am using the RAM optimally once I install the additional 16GB.
Regards in advance
Just curious: I have also noticed this limit with IE. I don't think it is IE itself, just an interface that it uses. However, I also do not have this issue on an older machine. Is it possible that this is related to 64-bit processors? Even if the OS is 32-bit, most modern processors are 64-bit capable. Could there be something screwy happening here?
Just as an aside: get to your limit with IE, then close the last window and open Calculator; you can open it 3 or 4 times. I have looked at handles, threads, processes, memory - nothing is coming close to its limit. (Okay, physical memory was at 1.7-1.74 GB, but that seems arbitrary to me. If it were always 1.73 that would be a data point, but random values tell me something else.)
Okay, maybe the previous post was a little misleading. It looks like it is reaching the maximum number of USER handles/objects and this is causing the strange behaviour. Calculator requires a lot fewer USER handles than IE :D
I am interested, though: why do I get more USER handles on a machine with more RAM? I thought this was a set limit? Could there be something in the way Windows handles USER handles when more RAM is available (paging or not, reading or whatever)?
Okay, this IE thing - it ain't an IE thing.
Correct me where I am wrong, and please fill in the blanks:
Every process in Windows (since NT) has a hard limit of 10,000 handles. Wonderful.
Every Windows system (XP+) has a system-wide limit of around 1.3 million handles. Brilliant.
Somewhere there is a "user" limit, and I cannot find it. If I open IE windows until nothing more runs, then close 3, start a cmd and run testlimit -h, I get 10,000. Now open another IE window and run it again: you get around 6,000. Why?! Do it again and you get maybe 3k. If these are per-process limits, why are they getting capped by another process??!! There is another limit that I am missing. Sorry if I can't read and it is already described here.
Is this a limit of the desktop heap or a USER handle limit? I am experiencing this problem on a Citrix server and am trying to troubleshoot it. Headache.
Maybe I am running in the wrong direction, but I got a warm feeling when I saw that handle limit suddenly drop.
I have a Windows 2003 R2 Enterprise server with 32 GB of RAM currently running a 32 GB pagefile. The Peak Commit Charge is around 5 GB. What size should I make the pagefile?
MarkR: Regarding setting the pagefile size, I must disagree with the advice in the article.
You suggest subtracting RAM size from the peak observed commit charge and setting the pagefile size to the difference.
Result: your pagefile plus RAM have enough space to accommodate your peak commit charge.
However this does not leave any room in RAM for code! Or any other mapped files. Or for the operating system's various nonpageable allocations. Remember, "commit charge" does not include these.
My recommendation is to set up your maximum workload, then use the very convenient performance monitor counter, Page file / %usage peak.
Your pagefile should be large enough to keep this under 25%, 50% at worst. Reason - lots of breathing room helps avoid internal fragmentation of the pagefile.
You really don't want to run the pagefile close to 100% full, especially with Vista and its much larger pagefile writes.
We recently got new Dell workstation PCs, running 32-bit XP SP3, with 4GB of RAM (dual quad-core processors). Due to frequent memory error messages, we had virtual memory set to 12280-12280. This problem occurred using MS Word, MS Outlook, Adobe Acrobat Professional (reading a 40MB PDF), ArcMap and Microstation.
No memory problems since. Users can't run defrag or make system changes, so this was the administrator solution.
Note: I was previously using a Dell Optiplex 620, same operating system, with 4 GB ram and never had virtual memory error messages.
As a relative newcomer to computers, I am curious:
Given that 32-bit Windows XP/Vista/Windows 7 cannot address much more than 3GB of RAM, and that hardware considerations may reduce this further: if a machine has 4GB of memory installed (e.g. 2x2GB), is it possible for the remaining memory (if any), up to 1GB, to be used as a RAMDRIVE?
External constraints prevent me from using a 64-bit OS for now, and I feel it would be beneficial to make use of any unused portion of memory. It could be used to speed up the system, perhaps by caching commonly used files following system startup, or used for virtual machines as a container for virtual page files rather than storing these on disk.
Thanks for any constructive help and guidance.
No James, XP/Vista/Win7 (x86) all happily address 4GB of RAM; I think you are confused about the 2/2 or 1/3 address space split available to each process.
If instead you are questioning the part of the 4GB that cannot be used by the OS due to memory-mapped devices, there's no help here either. A device can (or cannot) be remapped above 4GB to make the memory within the 4GB available, but there's not much a 'user' can do about this.
/out on a limb a little
People have crazy ideas about ramdisks, like putting their paging files on a ramdisk. What this actually does is cause the Windows memory manager to page more often to (admittedly) faster 'space'. It may be beneficial in some circumstances, but I don't think anyone should generalise a rule about it.
I've seen a couple of things here on SQL, but I am still questioning the recommended pagefile size for an MS SQL server (SQL 2005, 64-bit) with 32GB of physical RAM. 1.5x says it should be 48GB, which seems huge to me. I've looked at the perfmon stats and the pagefile usage is usually less than 10% of my current 4GB pagefile, so why would I need a bigger pagefile than that?
The way I look at it is that the 1.5x figure dates from long ago, before x64 existed. Nowadays this is no longer needed. With that amount of memory, and with x64 (this is the most important part), a 4GB page file is more than enough.
I strongly recommend you take a look at:
Hope this clears up your question.