This week I delivered a number of sessions at Prairie IT & DevCon in Calgary. These included a SOLD OUT IT Virtualization Boot Camp, an IT Camp where most of the conversation centered around Windows Server “8” Beta, as well as 2 sessions at the conference itself on What’s New in Windows Server “8” Beta for Hyper-V – where we only scratched the surface on all the great capabilities you can find in this exciting new operating system. While I was delivering these sessions, the inevitable question of how Hyper-V in Windows Server “8” Beta stacks up against VMware popped up. I had planned to write a blog post about this but Mitch Garvis was kind enough to do it for me, so I figured I’d share it with you here.
The short answer – Windows Server “8” Beta Hyper-V looks really good when compared with VMware. Take a look at the numbers and let me know whether or not you agree, then go download the beta and try it for yourself!
It’s here! OK, what I should say is that its BETA edition is here! Windows Server “8” is going to be a game changer for all sorts of reasons. However, for those people who have been saying that Hyper-V is not ready for prime time (it has been for a while), the new limits are going to make a lot of people re-evaluate that position.
1 Terabyte of RAM per Virtual Machine The previous limitation of 64 gigabytes per virtual machine in Windows Server 2008 R2 Hyper-V did not constrain most workloads, but there are certainly servers that need more RAM than that for very large workloads. I don’t think you are going to see a lot of virtual machines running the full terabyte anytime soon… but being able to break the 64GB barrier is nice! VMware went the same way in vSphere 5; their previous limit was 255GB. (vSphere 5.0: same)
160 Logical Processors per Host The 160-LP limit (which counts cores and hyper-threads) is going to keep coming up as long as Intel and AMD keep putting more cores onto a CPU, and more CPUs onto the board. VMware went the same way, up to 160 LPs. (vSphere 5: same)
1024 Virtual Machines per Host With the previous limitation of 384 VMs per host, I used to wonder who really needed that many. However, when you take into account how much RAM can go into a host (2TB), that is a lot of room for a lot of workloads. Add to that the fact that CPUs are more powerful than ever (thanks to Moore’s Law) and that any respectable datacentre stores its virtual machines on external storage, and we are in a place where it makes sense to put more and more VMs on a single host. That being said, it is not likely that companies are going to run that high a density under normal conditions, but when planning a failover environment you can now plan for fewer failover hosts (if necessary). VMware also boosted their limit, previously 320 VMs per host. (vSphere 5: 512 VMs per host)
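As a back-of-the-envelope illustration of how host RAM and the per-host VM cap interact, here is a quick sketch (the helper function and the 4GB host reserve are my own assumptions for illustration, not anything from the product documentation):

```python
# Rough density sketch: how many VMs of a given size fit on a maxed-out
# 2 TB host, bounded by the 1024-VMs-per-host limit. The host_reserve_gb
# default is an assumed figure for the parent partition, illustration only.
HOST_RAM_GB = 2048        # 2 TB maximum host RAM
MAX_VMS_PER_HOST = 1024   # Windows Server "8" Beta per-host VM limit

def vms_per_host(vm_ram_gb, host_reserve_gb=4):
    usable = HOST_RAM_GB - host_reserve_gb
    return min(usable // vm_ram_gb, MAX_VMS_PER_HOST)

print(vms_per_host(1))   # 1024 -- capped by the per-host VM limit
print(vms_per_host(8))   # 255  -- RAM-bound well before the cap
```

Small VMs hit the 1024-VM ceiling first; anything bigger runs out of RAM long before it runs out of VM slots.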
64 Nodes per Cluster For all of those who have badmouthed Microsoft clustering over the years (I am one of them) Failover Cluster Services in Windows Server 2008 / R2 was a breath of fresh air. What was previously daunting and scary was made friendly and usable, and now it is not uncommon to see small business customers implementing failover clustering (see Busting the Myth: You cannot cluster Windows Small Business Server) in environments that were previously too small for it to be cost efficient. In Windows Server “8” Beta Microsoft has increased the maximum number of nodes in a cluster from 16 to 64, which is huge for datacentre environments that really need that scale. VMware has also increased their number, but not to the same level. (vSphere 5: 32 nodes per cluster)
4000 Virtual Machines per Cluster By my math, if you can have up to 1024 virtual machines per host and up to 64 nodes in a cluster, in theory you could run up to 32,768 virtual machines in a cluster and still survive up to half of the hosts failing simultaneously before you max out your resources. Obviously someone on the product team knows something that I don’t (probably several somethings) and caps it at 4000 VMs per cluster, still a 300% increase over Windows Server 2008 R2 Hyper-V, which was capped at 1000 VMs per cluster. This is a huge lead over VMware, whose limits have also increased from vSphere 4.1 to vSphere 5, but not to the same extent. (vSphere 5: 3000 virtual machines per cluster)
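The arithmetic above can be checked in a few lines; this is just a sketch of the reasoning, using nothing beyond the published limits:

```python
# Cluster-capacity arithmetic from the paragraph above.
MAX_VMS_PER_HOST = 1024
MAX_NODES = 64
CLUSTER_CAP = 4000        # Windows Server "8" Beta cluster cap
PREVIOUS_CAP = 1000       # Windows Server 2008 R2 Hyper-V cluster cap

naive_max = MAX_VMS_PER_HOST * MAX_NODES
print(naive_max)         # 65536 -- every node fully packed
print(naive_max // 2)    # 32768 -- run at half density so half the
                         # nodes can fail and the survivors absorb it

increase_pct = (CLUSTER_CAP - PREVIOUS_CAP) * 100 // PREVIOUS_CAP
print(increase_pct)      # 300 -- percent increase over the old cap
```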
32 Virtual CPUs per Virtual Machine Here is where Microsoft has really hit a home run. Previous versions of Hyper-V limited you to four virtual CPUs. Kicking this up to 32 shoots way past VMware’s previous version and matches their current limit. If you have virtual machines that require huge processing capacity you can go as high as you want… with the limiting factor being your physical hardware (you cannot assign more virtual CPUs than the host has logical processors, counting hyper-threads). This will be another game changer and will go a long way toward proving the enterprise-readiness of Hyper-V. (vSphere 5 limit: same)
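Put another way, the effective vCPU ceiling for a single VM is the lesser of the 32-vCPU limit and the host's logical processor count. A tiny sketch (the helper is hypothetical, purely to make the rule concrete):

```python
# Hypothetical helper: effective per-VM vCPU ceiling under the
# Windows Server "8" Beta limits described above.
HYPERV_MAX_VCPUS = 32   # per-VM vCPU cap

def max_assignable_vcpus(host_logical_processors):
    """A VM cannot get more vCPUs than the host has logical
    processors (cores + hyper-threads), nor more than the cap."""
    return min(HYPERV_MAX_VCPUS, host_logical_processors)

print(max_assignable_vcpus(16))  # 16 -- dual 8-core host, no hyper-threading
print(max_assignable_vcpus(80))  # 32 -- big host, capped by the per-VM limit
```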
64 Terabytes per Virtual Hard Drive Advantage: Microsoft… in a huge way. With the previous limit of two terabytes per VHD file, the new and improved VHDX file format shoots through the ceiling and will support much larger volumes. While most of us have no need for volumes this large, there are customers who have been using either pass-through disks (or RDMs or extents in VMware) to support large database files. VMware’s VMDK files will still be limited to 2TB, but can be expanded to 64TB using extents (which I am not a fan of). They also offer support for 64TB volumes via Raw Device Mappings, but in Physical Compatibility Mode only. (vSphere 5: 2TB)
Other Features There are too many new features to mention, and over the next few months I will be writing about these and more in greater detail. Both Microsoft and VMware have added support for UEFI boot systems; VMware is offering a better graphical experience in your virtual machines, which now support Aero graphics in Windows 7; Microsoft’s RemoteFX is going to be huge… but there isn’t much I am currently allowed to say about it, except for the fact that you are going to like your VDI experience going forward with Windows Server “8” Beta!
There is a lot more to say, but I do not want to flirt with my NDA. If you are an IT Pro it is time for you to download the bits for Windows Server “8” Beta, install it, play with it, and get used to it. You are going to notice a huge difference over 2008, and if you don’t fall in love with it, I will give you a money-back guarantee (yes, the beta is free).
The future of Windows Server Virtualization is BRIGHT… and for the proponents of VMware who feel that nobody will ever touch them, I look forward to seeing the two sides push each other to make the experience better and more powerful, because that way it is the IT shops – the administrators, the IT Pros – who really win!
This post also appears on garvis.ca.