The question of the ideal page file size is one that customers have been asking us about frequently of late.
In this blog post I will try to pull together some of the current thinking on the subject, as some of the existing documentation has been around for quite a while and can be a source of confusion. I have drawn on learned colleagues within Microsoft who helped me put this post together.
The size of a page file should not be set arbitrarily; some thought and consideration should go into it. The suggestions below apply to the 2003 and 2008 platforms, in both 32-bit and 64-bit environments, although special consideration should be given to 64-bit environments, as referenced in the KB article list at the bottom of this post.
By default, if a machine has less than 2 GB of memory, Windows sets the page file size to 1.5 times the amount of memory in the box. If the amount of memory exceeds 2 GB, the page file size is by default set to between 2046 MB and 4092 MB. This is expected behaviour; however, it may not always be the optimum setting.
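As a rough illustration, the default rule described above can be sketched as follows. This is illustrative only; the exact algorithm varies by Windows version.

```python
def default_pagefile_mb(ram_mb: int) -> tuple:
    """Approximate the default page file range described above.

    Below 2 GB of RAM, Windows defaults to 1.5x RAM; above that it
    uses a range of roughly 2046-4092 MB. A sketch, not the exact
    algorithm Windows uses.
    """
    if ram_mb < 2048:
        size = int(ram_mb * 1.5)
        return (size, size)
    return (2046, 4092)

print(default_pagefile_mb(1024))  # (1536, 1536)
print(default_pagefile_mb(8192))  # (2046, 4092)
```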
One of the best recommendations for deciding the page file setting is to monitor the following counters in Perfmon against these suggested thresholds:
· Memory\Available Bytes: no less than 4 MB
· Memory\Pages Input/sec: no more than 10 pages per second
· Paging File\% Usage: no more than 70 percent
· Paging File\% Usage Peak: no more than 70 percent
· Process\Page File Bytes Peak: not applicable
For example, if Paging File\% Usage shows a value of 70% or higher, we need to consider increasing the page file or creating another page file on a different disk.
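A quick way to see this check in practice: the sketch below compares sampled counter values against the suggested thresholds above. The counter names match the list; the sample values are made up for illustration, and on a real system they would come from Perfmon logs.

```python
# Suggested thresholds from the list above, as predicates that
# return True when the sampled value is acceptable.
THRESHOLDS = {
    "Memory\\Available Bytes": lambda v: v >= 4 * 1024 * 1024,
    "Memory\\Pages Input/sec": lambda v: v <= 10,
    "Paging File\\% Usage": lambda v: v <= 70,
    "Paging File\\% Usage Peak": lambda v: v <= 70,
}

def flag_counters(samples: dict) -> list:
    """Return the counters whose sampled value breaches its threshold."""
    return [name for name, ok in THRESHOLDS.items()
            if name in samples and not ok(samples[name])]

# Hypothetical samples: page file usage peak is over 70 percent,
# so the page file should be grown or split across disks.
samples = {
    "Memory\\Available Bytes": 50 * 1024 * 1024,
    "Memory\\Pages Input/sec": 3,
    "Paging File\\% Usage": 55,
    "Paging File\\% Usage Peak": 82,
}
print(flag_counters(samples))
```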
In addition, the ideal page file size can be determined by looking at Task Manager or Process Explorer:
· ‘Commit Charge Limit’: the amount of virtual memory available without increasing the page file (RAM + total size of all paging files)
· ‘Commit Charge Peak’: the peak amount of page file space since the system booted that would have been needed if the system had to page out all private committed virtual memory (indicates how close the system has come to the limit)
The aim is to make sure that the Commit Charge Limit is always larger than the Commit Charge Peak, and that the peak never gets close to 70% of the limit.
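The 70% rule of thumb above can be expressed as a simple calculation. The commit limit here is approximated as RAM plus the total size of all paging files, per the definition above; the figures in the example are hypothetical.

```python
def peak_commit_pct(ram_gb: float, pagefile_gb: float, peak_gb: float) -> float:
    """Return the peak commit charge as a percentage of the commit limit.

    Commit limit is approximated as RAM + total size of all paging
    files, as described above.
    """
    limit_gb = ram_gb + pagefile_gb
    return 100.0 * peak_gb / limit_gb

# Hypothetical example: 32 GB RAM, 8 GB page file, 20 GB peak commit.
pct = peak_commit_pct(32, 8, 20)
print(f"{pct:.0f}% of commit limit used at peak")  # 50% - within the 70% target
```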
Process Explorer’s System Information window provides more detail, showing the percentage of the peak as compared to the limit and the current usage as compared to the limit. This is discussed in Windows Internals, pages 446-447.
One other thing to consider is setting the Initial and Maximum page file values to the same size. This prevents page file expansion, which can be a performance killer and can lead to page file fragmentation.
Performance can also be increased by splitting the page file across disks: http://support.microsoft.com/?id=197379
A possible approach is to start with a 4 GB page file on the root with an additional page file on the data disks, then monitor the counters above and tune accordingly.
Systems with Large Amounts Of Memory
On systems with larger amounts of memory, however, a different approach applies. For example, on an x64 system with 32 GB of memory it would be advisable to manually set an 8 GB page file on the Windows drive (minimum and maximum both set to 8 GB). This provides plenty of space for paging and also allows a kernel-only dump to be produced (in all but the worst-case scenarios).
Mark Russinovich touches upon the subject of page file sizing for modern systems, and also mentions:
“The maximum is either three times the size of RAM or 4GB, whichever is larger.”
[Although he states that this is for Vista and Server 2008, it seems to apply to Server 2003 as well.]
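The quoted rule is easy to express directly. A minimal sketch of the “three times RAM or 4 GB, whichever is larger” maximum:

```python
def automatic_pagefile_max_gb(ram_gb: float) -> float:
    """Maximum automatic page file size per the rule quoted above:
    three times the size of RAM or 4 GB, whichever is larger."""
    return max(3 * ram_gb, 4.0)

print(automatic_pagefile_max_gb(1))   # 4.0  (the 4 GB floor applies)
print(automatic_pagefile_max_gb(32))  # 96.0 (3 x RAM dominates)
```

Note how quickly 3 x RAM grows on large-memory systems, which is exactly why the next section argues against sizing the page file as a fixed multiple of RAM there.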
What you must consider is that on a system with 32 GB of memory it is essentially impractical to create a page file of 1.5 x 32 GB. Consider the following factors:
1. The amount of time it will take to create it
2. What practical use it will serve
3. The amount of disk space you need to ensure is available on the boot partition
4. How you will transport this huge file to be analysed
The size of the page file depends on the function of the server and the applications running on it. Looking at the discussion on MSDN, it seems that SQL Server should not be performing much paging. So, if a kernel-mode dump is sufficient and you don’t have any other applications that may incur heavy paging (BizTalk, perhaps?), you could consider a relatively small page file and monitor its usage in Performance Monitor to ensure it is sufficient.
A frequently asked question is: how big should I make the page file? There is no single answer, because it depends on the amount of installed RAM and how much virtual memory the workload requires. If there is no other information available, the normal recommendation of 1.5 times the amount of RAM in the computer is a good place to start. On server systems, a common objective is to have enough RAM that there is never a shortage and the page file is essentially not used; on these systems, having a really large page file may serve no useful purpose. On the other hand, disk space is usually plentiful, so having a large page file (e.g. 1.5 times the installed RAM) does not cause a problem and eliminates the need to fuss over how large to make it.