virtualboy blog

Matt McSpirit on Virtualisation, Management and Core Infrastructure

Investigating the VMware Cost-Per-Application Calculator


As readers of this blog will be well aware, one of the key additions to Hyper-V R2 was Live Migration.  Prior to R2, Hyper-V had a capability known as Quick Migration, which enabled a virtual machine (VM) to be migrated from one physical host to another, but involved pausing the VM as it was moved.  VMware, whose equivalent, VMotion, had been around for some time before this, inevitably picked up on it and branded Hyper-V insufficient for an Enterprise’s needs.  Still, 100+ case studies later, Hyper-V R1 has established a solid foundation, with a low TCO, ready for R2 to take over the baton.

Now that R2 is here, and features like Live Migration, Hot-Add of storage, Cluster Shared Volumes, Redirected I/O and more are all baked into the product for free, the direction of the argument has changed.  It’s no longer ‘We have Live Migration and you don’t’ – it’s ‘We have a lower cost-per-application than you’, an application being a VM in this case.  There’s a prime example of this argument being presented by VMware in this video, aptly titled “VMware slams Microsoft”.  I’ll let you make your own mind up on that one.

What’s interesting is that neither VMware’s nor Microsoft’s pricing model really reflects a cost-per-app comparison, yet the cost-per-app calculator is being used to prove that VMware vSphere does in fact have a lower cost-per-app than a Hyper-V and System Center combination.

In this post, we’re going to investigate how much of this is actually true…

<disclaimer>I work for Microsoft, but that doesn’t mean that I don’t appreciate VMware technologies.  My role is to ensure Microsoft Partners can build successful virtualisation practices around Microsoft virtualisation, and articulate the key values of our virtualisation technologies, so they, the Partner, can provide the right solution for their customers.  This isn’t about slating any of their technologies, as they are the market leader, and in many ways, have defined where virtualisation has got to today.  I am a VCP (not vSphere :-( ).  This post is purely concerned with addressing the cost-per-application calculations.</disclaimer>

So, here we go…

Calculator

Firstly, where can you find the cost-per-app calculator?  Here: http://www.vmware.com/technology/whyvmware/calculator/

Let’s plug in some values:

  1. # of Applications to Virtualise – 100
  2. Virtualisation Host Type – Server B
  3. Networked Storage Type – iSCSI SAN
  4. Compare to Vendor – Microsoft
  5. VMware vSphere Edition – Enterprise Plus
  6. Physical/Virtual Management – Virtual
  7. Electricity Costs – Average
  8. Datacenter Space Costs – Average

OK, so that’s the values plugged in, what pops out the other side?

“The unique features and architectural design of VMware vSphere 4 allow you to run many more applications per server (higher VM density) at an acceptable level of performance than other virtualization solutions. In our own testing and from reviews of our customers, we have seen VMware vSphere 4 users commonly achieve 50-70% higher VM density per host than with Microsoft Hyper-V, resulting in a 20-30% lower cost per-application.

Based on your inputs, the cost-per-application to virtualize 100 applications using VMware vSphere 4 Enterprise Plus Edition is $2,299 -- 3% lower* than with Microsoft Hyper-V R2 and System Center.”

Let’s just dissect what that is saying for a moment.  So, vSphere can provide higher levels of density, predominantly through the use of Memory Oversubscription/Overcommit (a feature Hyper-V, even in R2, doesn’t have).  Based on this, you’d run more VMs per host, and thus you’d need fewer hosts.  We’ll investigate this more later.  However, look at the last line of the first paragraph: “…resulting in a 20-30% lower cost per-application.”  This is actually in line with what was said in the video I mentioned, and linked to, earlier.  Also, what’s an acceptable level of performance?  Hmmm!

However, the very next line, based on my inputs, suggests that the saving is actually only 3% (as opposed to 20-30%), and notice the little * after “3% lower”?  This indicates, and I quote, “Assumes that a VMware ESX server can run 50% more applications than a Microsoft Windows Server 2008 (Hyper-V) host”.  Firstly, where’s R2 gone from the comparison?  And secondly, it’s an assumption that everyone uses Memory Oversubscription, to that level, and thus makes those gains.

What is Memory Oversubscription?

I’m not going to take anything away from VMware here.  What ESX can do with memory management is very impressive indeed.  There’s a great whitepaper that’s just been made available to download, discussing the main techniques ESX uses to manage memory with little impact on performance in most cases.  In a nutshell, Memory Oversubscription is made up of three key mechanisms: Transparent Page Sharing, Ballooning, and Hypervisor Swapping, with the latter being the least desirable, as it can have a considerable performance impact.  For the purpose of this discussion, Memory Oversubscription is a mechanism whereby, if a VM isn’t using all of its allocated RAM – say it’s using 800MB of its 2GB allocation – the remaining 1GB or so can be returned to something like a pool of spare RAM, which can then be used by other VMs.  This can, as the information above states, allow more VMs to run on a host, providing they don’t all need their full allocation of RAM at once.  To simplify: if you had 16GB RAM, you could potentially get 16 x 2GB VMs running on that host, with little performance impact, providing the total working set of memory in the VMs is less than the 16GB of RAM.  TPS would obviously help here.  However, if all 16 of those VMs started to go mad with a craving for RAM, and they all needed their 2GB, you could find yourself hypervisor swapping, which isn’t – and VMware would accept this – the best place to be.  For more info on Memory Oversubscription, there is a great explanation here.

Do I wish Hyper-V had some of these memory management techniques?  Sure!  If only to give people the opportunity to evaluate, and test it, and see if it is the right fit to use in their environment.  Plus, I could get more VMs on my Shuttle PC :-)

To contrast this, Hyper-V’s approach to memory, at a high level, is simpler and more straightforward, in the sense that if you create a VM with 2GB RAM, it consumes 2GB RAM from the physical host, and this doesn’t change whether the guest OS is using 10% or 90% of its 2GB allocation.  In some ways this is preferable, as you always know what your VMs have got in terms of RAM, and even if they use their full 100%, you know you’re not going to get into a position of hypervisor swapping, so it won’t impact the performance of other VMs.  On the flip side, your limits are a lot more hardcoded.  You won’t be able to start that 16th VM if you’re already at capacity with 15 of your 1GB VMs on a 16GB host, even though none of those 15 VMs is using its full 1GB allocation.  So, in summary, it’s swings and roundabouts when it comes to memory management.  There was a post a while back on one of the VMware blogs which, I accept, was based on VI3, not vSphere, but it still highlighted that only 57% of the 110 customers surveyed used memory oversubscription techniques, and 87% of that 57% used it in production.  So not everyone is using it, but still, I’d rather have it in my kitbag than not – but that’s just me.
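
To make that contrast a little more concrete, here’s a minimal sketch (in Python, and very much not either vendor’s real algorithm) of how the two allocation models affect VM density on a single host, using the illustrative figures from the last two paragraphs; the 1GB of parent-partition/hypervisor overhead is my assumption, added purely so the 15-VM example above lines up.

```python
# A minimal sketch of the two memory models described above - NOT either
# vendor's real algorithm.  The figures are the illustrative ones from the
# text; the 1GB parent/hypervisor overhead is an assumption.

HOST_RAM_GB = 16     # physical RAM in the host
OVERHEAD_GB = 1      # assumed RAM kept back for the parent partition / hypervisor

def static_capacity(allocated_gb):
    """Hyper-V R2 style: every VM consumes its full allocation, used or not."""
    return int((HOST_RAM_GB - OVERHEAD_GB) // allocated_gb)

def oversubscribed_capacity(working_set_gb):
    """Oversubscription style: VMs fit while their combined *working set* stays
    within physical RAM; go past that and you're into ballooning and,
    eventually, hypervisor swapping."""
    return int((HOST_RAM_GB - OVERHEAD_GB) // working_set_gb)

print(static_capacity(1.0))          # 15 x 1GB VMs, as in the example above
print(static_capacity(2.0))          # 7 x 2GB VMs with static allocation
print(oversubscribed_capacity(0.8))  # ~18 x 2GB VMs if each only touches ~800MB
```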

For the purpose of this cost-per-app discussion, however, we’re not going to talk about how many people do or don’t use it – we’re going to focus on what the cost-per-app calculator states, and that is that you will ‘always’ achieve a 50% higher ratio (1.5:1) of VMs per host on ESX 4 than you will on Hyper-V R2.

Back to the Report…

The very next item in the report is a graph, which you can view for yourself when you enter your figures.  What I find funny, however, is the heading for the graph: “VMware vSphere 4 can deliver a much lower cost per application than Microsoft's virtualization offerings”.  I’ve bolded much lower there, but the reality is in the very next table:

 Results Table

Let’s take a look at this for a second.  I’ve highlighted that versus Enterprise Plus, the actual saving in terms of cost-per-app is only 3%.  A bit different from 20-30%, but hey, it’s still a saving, right?  Let’s look at the numbers a bit deeper though.  Why do we need 105 VMs, and they need 102?  Well, we said we’d be virtualising 100 VMs, plus the management technologies and associated database server.  So, with that in mind, we’d need a SQL box (as would they), plus VMs for System Center Virtual Machine Manager 2008 R2, Operations Manager 2007 R2, Configuration Manager 2007 R2, and Data Protection Manager R2.  Fair enough, I'll give them that – 105 versus 102 isn’t the end of the world anyway.  However, where it starts to get interesting is that, predominantly because memory oversubscription is assumed to be in use on the vSphere side, we can only get 12 VMs on a 32GB host, as opposed to 18 (50% higher) on the vSphere side.  This means we would in fact need 9 hosts instead of 6, and thus every cost that’s associated with a host, which we’ll discuss in a minute, gets added on three more times too.  We now have 3 more hosts’ worth of infrastructure costs and 3 more hosts’ worth of software costs (Windows licenses, management licenses etc.), thus our total costs are higher, and thus our cost-per-app is higher, by 3%.  It’s important to note that no further information about the ‘VM’ is provided, in terms of CPU, I/O, RAM and so on, so we’ll just have to work with what we’ve got, apples for apples.

Let’s just take a step back for a minute.  If you had done an accurate capacity plan, and decided to buy some new servers to host these VMs, how many Partners are realistically going to think ‘right, on VMware we’ll need 6 x 32GB RAM servers, and on MS we’ll need 9 x 32GB servers, so I’ll go and buy 9 x 32GB RAM servers’?  Why would you not just double the RAM, and potentially halve the number of servers you’d need on the Hyper-V side?  Simplistically, if 32GB hosts 12 VMs, would 64GB not host 24?  105 / 24 = 4.375 hosts, which at a push could be 4 hosts, but we’ll say 5 for headroom.  Four fewer hosts than before reduces our infrastructure costs quite considerably, along with reduced software costs, and thus gives a lower cost-per-app.

The (very) simplistic way to look at this is: while the vSphere cost > the cost of additional RAM (and you’ve got room in the server to increase the RAM), cost-per-app will be lower on a Hyper-V platform.
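
As a rough illustration of that arithmetic – using the densities the calculator assumes (12 VMs per 32GB Hyper-V host, 18 per 32GB vSphere host) and the VM counts above – here’s a quick sketch.  The $889 figure comes from the Dell pricing later in the post; the per-host licence premium is purely a placeholder of mine, not a VMware price.

```python
import math

def hosts_needed(total_vms, vms_per_host):
    # Simple ceiling division: you can't buy a fraction of a host.
    return math.ceil(total_vms / vms_per_host)

print(hosts_needed(105, 12))  # Hyper-V R2 at 12 VMs per 32GB host -> 9 hosts
print(hosts_needed(102, 18))  # vSphere at 18 VMs per 32GB host    -> 6 hosts
print(hosts_needed(105, 24))  # Hyper-V R2 with the RAM doubled    -> 5 hosts (4.375 rounded up)

# The simplistic rule above: adding RAM wins whenever it costs less per host
# than the vSphere licensing it offsets.  Both figures below are examples only.
ram_upgrade_per_host = 889        # Dell 32GB -> 64GB upgrade, priced later in the post
vsphere_premium_per_host = 3000   # hypothetical licence delta, not a real price
print("Add RAM first" if ram_upgrade_per_host < vsphere_premium_per_host else "Pay the premium")
```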

You could say, well, what if they are older machines that don’t have so many RAM slots – if we max out the box in terms of RAM, vSphere is going to have a higher density than Hyper-V.  There is only one answer to that: yes, that’s true.  However, a large proportion of customers, when they embrace virtualisation, are coming from a very physical environment, and want to invest in newer, more powerful and scalable hardware to run these virtual infrastructures.  Working with a Partner to accurately capacity plan their estate, and then translating this into the level of kit they need, can be a straightforward exercise, yet choosing the kit to map to this will need to factor in budget, but also things like slots for RAM.  A 32-RAM-slot server is cheaper to upgrade to 128GB RAM than a 16-RAM-slot server, based on the fact that you can buy 32 x 4GB DIMMs more cheaply than 16 x 8GB DIMMs, although the 8GB DIMMs will decrease in cost as we move forward.  These are just a few of the important considerations to make, but still, if you had an older server with 16 slots for RAM (fairly common), all filled with 1GB DIMMs, you could take those out and replace them with 4GB DIMMs to get to 64GB RAM for a few thousand dollars, giving that server a new lease of life and, at the same time, maximising your investment in kit.  This obviously assumes that the piece of kit is supported, but also has enough I/O capacity and CPU grunt to handle the VMs!

As always, I digress!  The point is, if the lack of memory oversubscription is a blocker for Hyper-V, you could increase the RAM significantly, for a much lower cost than vSphere, as we’ll explore in more detail now.

Breaking down the Infrastructure Costs.

Firstly, what makes up the infrastructure costs?  Well, if you scroll to the bottom of your results page and expand Appendix A and B, you’ll find all the info there, but let’s take a deeper look.  Infrastructure costs are made up of servers/hardware, but also power/cooling etc.  I’m going to put a bit more of a real-life spin on this, but at the same time, I’m going to reuse a number of the figures from VMware’s calculator for power/cooling and datacenter costs.  Bear with me on this one, it will make sense!

Hardware

For this discussion, I’m going to head on over to http://www.Dell.com.  The reason I chose Dell.com is because, like the VMware cost-per-app calculator, you can head on over there yourself and try it out.  The use of Dell hardware has nothing to do with its performance, scalability, or cost.  It’s purely down to being able to quickly customise a server and get a price for doing so!

Based on VMware’s Server Profile Assumptions, our chosen server will be:

  • Dell PowerEdge R805
  • 2x Six Core AMD Opteron 2427,2.2GHz, 6x512K Cache, HT3
  • 32GB (16x2GB), 800MHz, Dual Ranked
  • No Operating System
  • 4x Broadcom® NetXtreme II 5708 1GbE Onboard NICs with TOE
    • LOM NICs are TOE, iSCSI Ready (R905/805)
  • 4x Intel PRO 1000PT 1GbE Dual Port NIC, PCIe-4
    • Total NICs = 12
  • 3Yr Basic Hardware Warranty Repair: 5x10 HW-Only, 5x10 NBD Onsite

FYI – based on the Hyper-V R2 rule of thumb of 8 vCPUs per physical core, a dual six-core server like this would support 96 single-vCPU VMs, or 48 2-vCPU VMs.
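
For the record, that’s just the supported-ratio arithmetic, sketched below; the 8:1 figure is the rule of thumb quoted above and says nothing about how the VMs will actually perform.

```python
# Supported-ratio arithmetic for the dual six-core R805, using the
# Hyper-V R2 rule of thumb of 8 vCPUs per physical core.
sockets, cores_per_socket, vcpus_per_core = 2, 6, 8

total_vcpus = sockets * cores_per_socket * vcpus_per_core
print(total_vcpus)       # 96 -> 96 single-vCPU VMs
print(total_vcpus // 2)  # 48 -> 48 two-vCPU VMs
```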

Total Server Cost: $3,664

PowerEdge1

What about Storage?  Well, this is the same for both parties, so we’re not going to go into the nitty-gritty about this.  If you look at Appendix A on the cost-per-app calculator, on both sides of the fence, the total comes in at $30,000, and takes into account the number of GB we’d need, and an assumed cost of $3 per GB.

Total Storage Cost: $30,000

What about Networking?  Well, we’ll stick with the VMware pricing of $4,000 per network switch, and a network switch has 24 ports.

Total Networking Cost: $4,000 per switch

What about Power & Cooling costs?  Well, actual operating power is 424 watts per server, according to the calculator, plus 530 watts per server for cooling, but to keep things simple, I’m going to use the figure of $833 per server per year for power/cooling costs.  This is based on the power/cooling costs of VMware’s 6 hosts coming in at $5,000: $5,000 / 6 = $833 per host.

Total Power/Cooling Cost: $833 per Host

What about Datacenter space?  Well, these are identical for both parties, as they assume that all servers (2U) will fit in a single rack (24U), and that the datacenter space consumed by 1 rack is 7 square feet, which equates to a cost of $2710 for Datacenter space costs.

Total Datacenter Space Cost: $2710
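
Pulling those per-unit figures together, here’s a rough model of the infrastructure side of the comparison (hardware, networking, power/cooling and datacenter space only – software licensing is dealt with in the tables below).  The switch count simply assumes 12 ports per host, as per the R805 spec above, and the extra $889 per host in the last scenario is the RAM upgrade priced later in the post.

```python
import math

# Rough model of the infrastructure-only costs, using the per-unit figures above:
# $3,664 per 32GB R805, $30,000 of shared storage, $4,000 per 24-port switch,
# $833 per host per year for power/cooling, and $2,710 of datacenter space.
# Software licensing is deliberately excluded - it's covered in the tables below.

NICS_PER_HOST = 12   # from the R805 spec (4 onboard NICs + 4 dual-port cards)
SWITCH_PORTS = 24

def infrastructure_cost(hosts, server_cost, storage=30000, switch_cost=4000,
                        power_cooling_per_host=833, datacenter_space=2710):
    switches = math.ceil(hosts * NICS_PER_HOST / SWITCH_PORTS)
    return (hosts * server_cost
            + storage
            + switches * switch_cost
            + hosts * power_cooling_per_host
            + datacenter_space)

scenarios = [
    ("Hyper-V R2, 9 x 32GB hosts", 9, 3664),
    ("vSphere,    6 x 32GB hosts", 6, 3664),
    ("Hyper-V R2, 5 x 64GB hosts", 5, 3664 + 889),  # +$889 RAM upgrade per host
]
for label, hosts, server_cost in scenarios:
    total = infrastructure_cost(hosts, server_cost)
    print(f"{label}: ${total:,} infrastructure, ${total / 100:,.0f} per app (hardware only)")
```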

So, what are the scores (George Dawes)?

Well, using our real-life hardware (and admittedly, this wouldn’t be real-life for all organisations, but hey ho):

My Table 1 

As you can see, the infrastructure costs, based on 9 Hyper-V R2 hosts vs. 6 ESX hosts, aren’t quite as far apart as they were in the VMware table shown earlier, yet this is a real example, taken with live server information.  Inevitably, because Hyper-V needs more hosts, we consume more power, we need more networking switches (5 switches at $4,000 each, as opposed to 3 on the VMware side), and we need more software licenses, for things like Windows Server Datacenter edition.  You’ll notice that our SQL costs are different too.  I’ve chosen to use the Per Processor version of SQL, with 1 CPU’s worth of SQL Server 2008 Standard Edition retailing at $8,998 including SA.  This means we don’t need to worry about CALs for all the devices that will be indirectly accessing the SQL box, plus it means we can use it for more in the infrastructure too.  If we put that SQL Server inside a VM, provide it with 1 vCPU, and assume just SCOM, SCCM, SCDPM and SCVMM are using instances within that database, I believe this would be adequate.  On the VMware side, if SQL is being used for anything else in that environment apart from vCenter, they too will have to think about licensing it differently, but for now I’ll take the hit on the Microsoft side and leave VMware with just the Standard Edition Server/CAL version of SQL 2008.

The results, however, show a pretty much identical cost per application (for those who want to check, it’s the total cost divided by 100, as that’s how many workloads we wanted to virtualise, management workloads aside for both parties).

Let’s take a step back now, and think back to what I wrote earlier:

“If you had done an accurate capacity plan, and decided to buy some new servers to host these VMs, how many Partners are realistically going to think ‘right, on VMware we’ll need 6 x 32GB RAM servers, and on MS we’ll need 9 x 32GB servers, so I’ll go and buy 9 x 32GB RAM servers’?  Why would you not just double the RAM, and potentially halve the number of servers you’d need on the Hyper-V side?  Simplistically, if 32GB hosts 12 VMs, would 64GB not host 24?  105 / 24 = 4.375 hosts, which at a push could be 4 hosts, but we’ll say 5 for headroom.  Four fewer hosts than before reduces our infrastructure costs quite considerably, along with reduced software costs, and thus gives a lower cost-per-app.”

Let’s perform the same comparison, but this time we’ll pay to double the RAM in the Hyper-V hosts, to offset the value that memory oversubscription provides.  This time, the same server will be stacked with 64GB (16x4GB), 800MHz, Dual Ranked RAM.  Let’s look at the difference in cost:

PowerEdge 2 PowerEdge 3

WOW.  $889 per server to double the RAM to 64GB.  Yes, the leap to 128GB on this server is an extra $9,590 per server, but that’s because we’d need to use the 8GB DIMMs, as I explained earlier.  That is where you’d perhaps think about moving from a 2U to a 4U with 32 DIMM slots, to reach 128GB for much less than an extra $9,590.

Anyway, what does this do to our calculations?  Assuming we go down to 5 Hyper-V hosts (I’m sure we could do 4, but hey, we’ll stick with 5 for this discussion):

My Table 2

As you can see, things have changed quite considerably.  By doubling the RAM on the Hyper-V side, for an extra $889 per host, we’ve been able to reduce our number of hosts to 5 (it could have been 4), which has resulted in less hardware, fewer networking switches, less power and cooling, and fewer software licenses.

Those eagle-eyed among you are bound to say: well, why wouldn’t I just buy 64GB RAM for my VMware vSphere hosts, and reduce my host count still further?  To that I say, absolutely!  You could in fact do that, and we’d be back to the start again, where we’d have 5 nodes on the Hyper-V side, vSphere would be running on, say, 3 nodes (assuming around a 36:1 consolidation ratio), and the table would look something like this:

My Table 3

You can see that this time vSphere provides an 8% cost-per-app advantage over a Hyper-V and System Center combination, yet if we take the 43% of customers from the earlier survey who don’t use memory oversubscription, those customers are actually 23% worse off.  Looking at the first vSphere Enterprise Plus column, you can see that the hosts are now running at 36:1.  Not every customer is going to be comfortable running at 36:1, whether it is supported or not.  One of Microsoft’s biggest public case studies is Kroll Factual Data, who are running at around 30:1, with no memory oversubscription, on Hyper-V R1, and it’s enabled them to reduce their servers from 650+ to 22, cutting their power bill by 90%.  Most customers and partners that I speak to typically go in at around 10-15:1, maybe a little higher on occasion.  Plus, at 36:1, what this example doesn’t take into account is the level of I/O that will be going on at that point, and also the CPU utilisation.  To feel comfortable, you may choose to move to a 4-way server, with 4 x Quad Core processors, to give you that additional scalability – but remember, as soon as you add more processors, you’ll need to double your vSphere license costs, and also your System Center Suite costs on the Microsoft side.  I hope you appreciate that, at this stage, there are a lot of ifs, buts and maybes.

We could keep going round in circles, adding RAM to one, working out the figures, then repeating for the other.  The whole point I’m trying to make about this cost-per-app calculator is that it’s not as black and white as it first appears.  When you start to delve into the numbers, it’s clear to see that if you’re clever with your hardware selections, and your calculations are accurate for your environment, based on strong capacity planning information, then with Hyper-V and System Center you can save a considerable amount of money.

Assume an organisation has already got Windows licenses that it can transfer into the environment, and what are you left with?  vSphere and vCenter costs vs. Hyper-V and System Center costs.  As you’ll see from the final column in the table, vSphere license costs for a 5-node environment come in at $61,302, whereas Hyper-V and System Center come in at $19,043, and that’s with the more expensive SQL, which may also be needed on the VMware side.  That’s a clear saving of around two-thirds of the cost – upfront costs, in the customer’s pocket.  Plus, with it being under an agreement, chances are it will be paid off over the 2 years rather than upfront, which means you’ll get the returns before you’ve fully paid for the software!

“But you’re getting so much more functionality with vSphere”

Judging by this table, yes!!

VMware Table 1

So, thankfully, we’re given the benefit of the doubt when it comes to ‘Single Server Provisioning’, i.e. you can run virtual machines, but straight away we lose a tick for High VM Density.  This is subjective.  For me, 24:1, or even 30:1 as Kroll Factual Data deployed, is a high VM density, so if you’re going to use generalist terms, then as long as the hypervisor can support large numbers of VMs (regardless of how much you upgrade the RAM), I can have a tick in that box.  If, on the other hand, you use a more technical way of describing the option, i.e. ‘Has Memory Oversubscription’, which not everyone will understand, then yes, I accept that there shouldn’t be a tick there.

On to Clustered File System.  Hmm.  Does it matter that we don’t have a ‘clustered file system’, when in fact we have a non-proprietary Cluster Shared Volumes mechanism, which actually out-scales VMFS-3, from 2TB up to 256TB, and has load-balanced MPIO for FC and iSCSI built in, for free?  Just have a read of what Jason Perlow of ZDNet thinks of VMFS-3.

Ultra-thin Hypervisor?  I’ll let you read Jeff Woolsey’s posts on the Hypervisor Footprint Debate:

- Part 1
- Part 1 Update
- Part 2
- Part 3

Both systems provide the capability for automated failover – VMware HA on the VMware side and Failover Clustering on the Microsoft side – so that’s a tick in the box for both there, yet we lose a tick for online/offline VM patching.  I assume by patching they are referring to Update Manager.  Well, with System Center Configuration Manager 2007 R2, we have one of the most scalable, mature patch/update/app/OS deployment tools on the market, which can also be used to deploy patches/updates/apps/OSs out to the physical desktop estate – something Update Manager can’t address.  It’ll also help you manage licenses through the Asset Intelligence feature, and deploy desired configuration baselines out to OSs, physical or virtual.  It also integrates with App-V and Intel vPro, to name but a few.  These are just a few of the many features that make SCCM much more than just a ‘patching tool’.  To handle offline patching, we have a free bolt-on tool for SCCM/SCVMM, called the Offline Virtual Machine Servicing Tool, so we deserve a tick there.

No integrated disk backup?  How about Data Protection Manager 2007, which we’ve paid for as part of our suite?  This will back up not only the VM (like Data Recovery on the VMware side), but also the key workloads inside the VM, like SharePoint, Exchange, SQL, Windows, File Servers, System State etc (unlike Data Recovery).  The stuff that matters.  It also has a single instance storage capability, which can help save on storage space.  It can backup to tape, which Data Recovery can’t, plus it can protect into the cloud.  We’ll have a tick for that, thanks very much.

Storage Thin Provisioning – do you mean dynamic disks, which have been in Hyper-V, and even in Virtual Server, for years?  Well, that’s in Hyper-V, and guess what – if you’ve got System Center Operations Manager, you can monitor it!  Alert on it!  Create custom actions on it!  Tick, thank you!

Hot Add is an interesting one, with both parties providing the capability to hot-add/remove storage; however, when you start to look at the capability of hot-adding/removing CPUs and memory, I’d strongly recommend you give Jason’s blog post a read… I agree wholeheartedly with Jason’s comment (in the comments section): “I think it’s going to be quite a while before change management is ready for hot add CPU or memory”.

On to Live Migration, where we can perform one LM, between two nodes, at a time.  Does this mean that VMware can perform 2 simultaneous Live Migrations quicker than we can do 2 separate ones?  I doubt it, but does it matter?  Live Migration gives the IT Admin more flexibility than they’ve ever had before, and whether the LM takes 1 minute or 30 seconds, the OS/user is unaware it is happening to the VM, which makes this point much less relevant.  The fact that we include it, for free, opens this feature up to organisations of all shapes and sizes.

Fault Tolerance – this is an interesting one.  We don’t have an answer to this right now, but we’re partnering with Marathon to provide one.  I’d strongly suggest you have a look here.  The list of caveats for this feature alone is, well, long.  Some of my favourites include no DRS for the protected VMs, 1 vCPU only for protected VMs, no Thin Provisioning, no Storage VMotion, no ballooning…

Firewall Virtual Appliance – nope, not a direct apples for apples comparison from MS on that one, although I’m sure some clever chap could come up with some kind of ISA-like appliance to do a similar job, but time will tell on that one.

Storage Live Migration – SCVMM 2008 R2 provides a very similar capability, albeit pausing the VM whilst the migration is taking place.  A handy feature, but not used as much as regular Live Migration.  Half a tick for us on that one I guess.

Distributed Resource Scheduling (DRS) – I think you’ll find that SCVMM and SCOM combined provide you with Performance and Resource Optimisation, a.k.a. PRO.  Not as easy to set up as DRS, but a great deal more extensible, through the authoring of PRO-enabled Management Packs.  Tick in the box there, thanks.

Power Management – I was with a VMware Partner the other day who, in front of an audience, said “We’ve not actually seen anyone deploy DPM – having their servers shut off scares them!”.  Windows Server 2008 R2 in general has a much better grasp on power than previous versions, so consumption is reduced, but if you wanted to achieve a like-for-like powering down of servers, with SCOM and a bit of scripting, I’m sure it’s something that could be achieved.

Host Profiles – I still can’t get over the fact that Host Profiles is an Enterprise Plus-only feature.  In a nutshell, Host Profiles streamlines the configuration of new ESX hosts and makes it easier for administrators to check for compliance.  It won’t actually deploy ESX onto the bare metal for you – you’ll still have to do that manually – yet once deployed, you can quickly make the ESX hosts consistent.  Compare that with SCCM, however, where you can define a gold task sequence, deploy the OS out to blank hardware with zero touch from the end user, and then, once it’s booted, continue to update that host centrally, from the same place, with desired configuration baselines.  Combine this with Group Policy, and you’ve got an incredibly centralised, scalable deployment and management mechanism.

Distributed Network Switch – this is a pretty powerful capability that brings much greater granularity to networking, and specifically allows 3rd party integration from vendors like Cisco, with their Nexus 1000V.  At this time, I’ll be honest, Microsoft doesn’t have an apples-for-apples comparison with this capability.  Sure, Hyper-V networking supports VLANs, Jumbo Frames, TOE, VMQ and more, but if you need the Cisco integration, you’ll have to go for Enterprise Plus, then add on the Nexus 1000V too.

So what would my version of events look like?

My Table 4

Slightly different, yet you can see that VMware’s vSphere starts to fall down where management is concerned.  I don’t mean management of the black box that is the VM – vCenter is a prime example of a well-executed technology that does exactly what it says on the tin – however, customers need more than this.  It’s great to virtualise workloads like Exchange or SQL, but without a monitoring technology that monitors at the workload level, how does anyone know what’s going on, and how can they show this back to the business?  Also, the whole VMware management ethos is to manage virtual machines, but what about the stuff that isn’t virtual?  Do you need a separate set of tools to manage that too?  Do I go and invest in Symantec for backup, deploy Altiris for my patch/update deployment, and use OpenManage to monitor my physical servers?  All these extras add up to what System Center is providing out of the box.  VMware are very strong in the virtualisation space, with mature and performant technologies, but they know all too well that once workloads are virtualised, there is more to an estate.  Customers are now looking to optimise their infrastructure, with OS change management, centralised deployment methodologies, patch control and more, and this is what System Center has been doing for a long time.  With a strong hypervisor offering to boot, Microsoft is in a strong position to offer maximum value and big ROI to customers.  It’s not about being cheaper than VMware – it’s about offering the right level of value, to maximise current and future investments in technology.

Summary

Blimey, this was a long post.  If you’ve followed it to the end, well done.  If you haven’t, you won’t know I’m writing this!  I guess what I’ve tried to demonstrate in this post is that you shouldn’t always believe what you read on either VMware’s or Microsoft’s website.  We’re both guilty of the ‘marketing’ stuff, yet when you sit down with a Partner and thrash through all this stuff, it starts to become clear, and that’s what I hope I’ve provided here today.  Anyone who knows me knows I’m passionate about what I do, and wear my heart on my sleeve when it comes to our technologies.  I’m the first to admit when I think someone has something better than we do – I think I demonstrated that earlier when talking about Memory Oversubscription.  I’m a fan of the technologies on both sides of the virtualisation fence, but do I think virtualisation is where the battle will be won?  No.  Will it be won on the management front?  Yes.  That’s the angle Microsoft is coming from, and that’s what I’m talking to Partners about every day of the week.  If you’re a Microsoft Partner, and you can deploy a System Center project encompassing all 4 of those technologies, not only are you heavily optimising a customer’s infrastructure, but the number of consultancy days will be significantly more than it would have been for a ‘virtualisation project’, yet at the same time, the overall cost to the customer (licensing + consultancy) will, in most cases, be less than just the vSphere licensing.  The Partner wins with services revenue (big margins), and the customer wins with a trusted advisor (Mr Partner) and an optimised, well-managed, “physical, virtual, desktop to datacenter” estate.

All that’s left to say is, thanks for reading, feedback welcome, and have a great weekend!



Comments
  • Hi Matt, very very long post but worth the read!

    Now that Hyper-V R2 has Live Migration and HA, the next big thing is the memory overcommitment feature, I guess ;)

    I don't totally agree with you on the management angle.  It's not because the mgmt tools come from a single vendor, i.e. Microsoft, that they are better or integrate better with each other.  Everything under the same hood introduces a false sense of security, which is not good IMO.

    Keep on with the good post!

    Cheers,

    Didier

  • Hi Didier,

    Thanks for your comment!

    Who knows what the next feature list will look like for Hyper-V, and how long it will take!  That is for much more important people than me to decide!

    With regards to management, I guess from a Microsoft perspective we'd like to think that our tools work well together and provide great value, but, as you say, they don't necessarily have to be from the same vendor, as long as they provide the functionality YOU need, for YOUR business.  I'm not sure whether a single vendor introduces a false sense of security, but I agree with you about multiple vendors!  It's all about what is the right fit, at the right budget.

    I think what we have to agree on though, is how important management tools are, regardless of the vendor, in an infrastructure.

    Thanks for reading!

    Matt

  • Cracking post Matt, must've taken a little while to put that together :-)

    It does a great job getting through some of the FUD that VMware have put out there – it will definitely come in handy in various conversations!

    Cheers

    Rich

  • Thanks Rich - took me a fair while to write, but glad it will come in handy!

    Microsoft can be just as guilty on the FUD side on occasion, but (I hope) you won't see any of it on this blog!

    Thanks!

    Matt

  • Matt,

    Great article.

    If only more Microsoft employees were like you.

    Your article should be required reading for every MS virtualization person. Curious as to what Jeff Woolsey thinks of it...

    One point you make while recognizing the value of memory overcommitment is that overcommitment assumes all the VMs aren't going to want all of their RAM at the same time and break the benefits. Well that's the nice thing about DRS, VMs will be migrated off to get things under control again. I doubt all VMs across your whole datacenter will want all their RAM at the same time.

  • Hi Shawn,

    Thank you for taking the time to comment, and also for the flattering feedback.  I couldn't say what Jeff thinks of it - not sure he reads my blog!

    With regards to the memory overcommit - you are absolutely right, and I apologise wholeheartedly for overlooking this factor.  If you're using DRS, there's a much lower chance of running into degraded performance due to hypervisor swapping.  I guess if you've got the Advanced SKU or lower, then this will apply more, but, as you rightly state, anyone with Enterprise/Enterprise+ will have DRS, so should be in a better position :-)

    Thanks for the comment - this is the kind of thing I'm looking for!

    Matt

  • This is a great article and definitely a reference for arguments ;)

    Just to make sure I get the full picture, I've got a couple of questions:

    1 - What are the hidden costs of customizing vSphere or System Center products to actually fit the customer's day-to-day activities? My gut tells me this isn't free for either.

    2 - What is the alternative for Hyper-V customers to use for network/firewall management, and how costly is that?

    In your opinion, who is getting closer to the full dynamic datacenter vision, and how many years will it take to get there?

    Ahmed

  • Hi Ahmed,

    1) Your first question - well, this would depend solely on what the customer's day-to-day activities are!  I imagine that, out of the box, the System Center suite, with its 4 key technologies, will provide the capabilities to perform nearly all of the typical day-to-day activities.  However, I would expect an IT Admin within a customer to spend the majority of their time in SCOM, as this allows them to monitor all the other components of their System Center environment.  At the same time, SCOM provides customised views, so if you have a 'Database guy' or an 'Email guy', they can just see the bits that matter to them.  It's an extremely extensible technology, which really can be tailored to the customer's infrastructure.  I'd strongly suggest you have a watch of this video: http://www.youtube.com/user/duanebms - they are monitoring a custom web app called PetShop, and the video shows how the customer can monitor, and react to, changes during the day - very powerful.

    2) For network/firewall management - it depends on what you're looking to do.  Windows Server, from 2008 onwards, has a very granular bidirectional firewall, which can be configured centrally via Group Policy, so you could gain a strong level of protection just from this.  If you combine it with another in-box feature of 2008/R2 - Network Access Protection (NAP) - this will effectively help to quarantine non-compliant OSs from the network.  So, for instance, if you start up an OS that is out of date with patches, rather than expose it to the full network, the NAP server will pick up that it's out of date, and it will be quarantined immediately.  Again, this is centrally configured.  There's obviously nothing stopping a customer using a firewall they may have already invested in, too.

    3) Dynamic Datacenter - hmm, great question, and you know what, I would say both are pretty close, but from different angles.  VMware have a more feature-rich, tick-for-tick virtualisation platform than anyone else; however, we have a much more comprehensive overall management platform than VMware, with System Center.  Does either organisation truly know what their Dynamic Datacenter would look like?  Possibly not, but in my opinion, System Center gives Microsoft an edge, because it focuses on the one area that is most important in the datacenter, and that is the workloads, whereas VMware, as yet, predominantly focus on what you can do with the VMs.  Obviously there is plenty of room for debate on that question, and two people's Dynamic Datacenters could look completely different, but I think both organisations are in a strong place – for me, though, System Center has some distinct, relevant advantages.  But that's just me!

    Hope that helps!

    Matt

  • >as they are the market leader

    Humm I wonder for how long..

  • This is a great post, and good to see someone pointing out some of the myths surrounding memory overcommit. When we were evaluating Hyper-V R2 (beta at the time) and ESX, it was immediately obvious that we could double the RAM in the Hyper-V hosts rather than pay 3 times more for VMware overcommit.

  • Hi Phil,

    Thanks for the comment.  I'm glad you enjoyed the post, and found it useful.  It would be harsh of me to position memory oversubscription as a myth as such, but in most (though not all) cases it can be overcome by adding more RAM - although I acknowledge that this isn't always possible.

    The mid-market and small enterprise customers I tend to speak to rarely need to hit the 'hardware limit', i.e. the max supported RAM of the server, so they always have the opportunity to increase their scalability as and when they need to.  Combined with the low entry costs of a Microsoft solution, they tend to achieve what they set out to do, with a strong ROI.

    There's no denying that memory oversubscription, through the different techniques it utilises, is incredibly clever and powerful, but not everyone will use it to gain greater levels of scalability - more so as a safety net.

    Thanks again!

    Matt

  • Hi Matt,

    Excellent article, and it gives me more insight into comparing both products.  I'm doing research on ROI for SQL Server 2008 in a VM vs. multiple instances on Windows 2008 with Resource Governor.

    Have you ever thought about comparing multiple SQL instances on one physical server vs. SQL Server in a VM on Hyper-V?

  • Hi Mihir

    Thanks for the kind words.

    SQL in VMs versus multiple SQL instances isn't something I would typically compare (I'm 100% not a SQL guy! ;-)), although I believe this would be helpful for you: http://technet.microsoft.com/en-us/library/dd557540.aspx

    Thanks!

    Matt

  • Hi Didier.

    I disagree with your post.  I'm an employee of a Microsoft Gold Certified Partner - MCSE, MCITP, CCNA and VCP - but VMware is a more powerful virtualization infrastructure than Hyper-V.

    I have run many labs comparing hypervisors, and VMs on VMware have better performance than on other virtualization solutions.

    1. VMware have more experience in virtualization solutions.

    2. VMware needs less disk, RAM and CPU for the operating system.

    3. VMware has better management of hardware resources.

    4. If you add more RAM to a Hyper-V host, you need to add more CPU for each additional VM, which means you have to buy more processors.

    5. VMware's hypervisor is more secure than Hyper-V's.

    Best regards,

    Jairo Cetina
