Preamble: The point of this series, and the spirit in which it is written, is to take a holistic look at the issues facing our customers, discuss the complexities of memory management, and explain why we’re taking the approach we are with Hyper-V Dynamic Memory. This isn’t meant to criticize any person or technology; rather, it’s meant to foster an open and transparent discussion about the problem space.
When it comes to virtualization and memory, customers want to use physical memory as efficiently and dynamically as possible, with minimal performance impact, while maintaining consistent performance and scalability.
Looking at the bigger picture
In addition to asking customers about memory and how it relates to virtualization, we took a step back and talked to them about the broader topic of memory and capacity planning. Let’s remove virtualization from the equation for the moment. If you were to set up some new physical servers, how would you go about it? How would you determine the workloads’ memory requirements and the amount of memory to purchase?
If you answered, “it depends,” you’re correct. There isn’t one simple answer to this question; your mileage will vary based on your workload and your business requirements for scale and performance. When we ask customers how they tackle this problem, the answers vary widely.
Whatever the method, the result is far from optimal: customers overprovision their hardware and don’t use it efficiently, which in turn raises their total cost of ownership (TCO).
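To make that concrete, here’s a minimal back-of-the-envelope sketch in Python. The numbers are hypothetical, chosen purely for illustration, but the pattern of sizing for estimated peak plus a safety buffer is the one we hear about most often:

```python
# Hypothetical static-sizing example; all numbers are made up for illustration.

estimated_peak_gb = 12          # sized for the estimated peak load...
safety_buffer = 0.5             # ...plus 50% headroom "just in case"
provisioned_gb = estimated_peak_gb * (1 + safety_buffer)  # 18 GB per server

typical_working_set_gb = 6      # what the workload actually uses most of the day

utilization = typical_working_set_gb / provisioned_gb
print(f"Provisioned {provisioned_gb:.0f} GB, typically using "
      f"{typical_working_set_gb} GB -> {utilization:.0%} utilization")
```

In this example, two-thirds of the memory sits idle most of the time, yet it still had to be bought, powered, and cooled.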
Wouldn’t it be great if your workloads allocated memory automatically and dynamically based on their actual requirements, and you had a flexible policy mechanism to control how those resources are balanced across the system?
We think so too.
In my next post, we’ll discuss the confusion that is “memory overcommit.”
While overcommit may make VM memory usage more efficient, I don't see how it solves the question "how much RAM do I need to buy?" Wouldn't the question just become "how much RAM do you allocate to a Hyper-V farm?"
What about apps like Exchange that size their RAM usage as a percentage of the total?
Interesting views; however, they don't leave me any the wiser on how to plan for memory. If I have 32 VMs, each of which guidance says needs 4GB of memory, then their gross requirement is 128GB. VMware published a survey looking at overcommit that gives an average of 1.5x virtual-to-real memory, with a broad spread depending on workload. To my simple mind that says 86GB, or the next sensible round number (say, 96GB), would be a good starting point for the config.
What advice / range would you recommend for planning a Hyper-V server given the above assumptions?
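The arithmetic in the comment above is easy to replay. Here is a minimal sketch of that calculation; note that the 1.5x ratio is the survey average the commenter cites, not a vendor recommendation, and rounding up to 16GB multiples is just one plausible convention for picking a "sensible" config:

```python
import math

# Sizing arithmetic from the comment above. The 1.5x overcommit ratio is
# the survey average the commenter cites, not an official recommendation.
vm_count = 32
per_vm_gb = 4
overcommit_ratio = 1.5   # virtual-to-real memory ratio

gross_gb = vm_count * per_vm_gb            # 128 GB of virtual RAM
physical_gb = gross_gb / overcommit_ratio  # ~85.3 GB minimum physical RAM

# Round up to the next multiple of 16 GB (an assumed DIMM-friendly step).
recommended_gb = math.ceil(physical_gb / 16) * 16  # 96 GB
print(f"Gross: {gross_gb} GB, minimum physical: {physical_gb:.1f} GB, "
      f"suggested config: {recommended_gb} GB")
```

Run as written, this reproduces the commenter's figures: 128GB gross, roughly 86GB at 1.5x, rounded up to a 96GB configuration.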