Information and announcements from Program Managers, Product Managers, Developers and Testers in the Microsoft Virtualization team.
We just completed a great week at MMS 2011 in Las Vegas. To say it was a busy week would be a huge understatement. To everyone who attended, our sincere thanks.
I spoke to a lot of folks at the show and the feedback was overwhelmingly positive. Whether it was the announcements for:
…or the fact that every product in the System Center portfolio is being revved this year, everyone’s excited to see what the System Center 2012 releases have to offer. As usual, the hands-on labs and instructor-led labs were among the most popular offerings at MMS, giving folks the opportunity to kick the tires on all of the existing, newly released, and Beta products. And, as always, the lines started early.
MMS 2010: Quick Refresher
For the second year in a row, all of the MMS Labs were 100% virtualized using Windows Server 2008 R2 Hyper-V and managed via System Center by our partners at XB Velocity, running on HP servers and storage. MMS 2010 was the first year all of the labs were delivered via virtualization; in previous years, they all ran on physical servers. To say moving from physical to virtual was a huge success would be an understatement. Here are a few stats comparing MMS 2009 to last year’s MMS 2010:
Power reduction of 13.9x on the servers:
Power reduction of 6.3x on the clients:
Finally, a total of 40,000 VMs were delivered over the course of MMS 2010 on 3 racks of servers. (Technically, it was 6 half racks, but since we used full racks this time, I’m calling it 3 racks so we’re making an apples-to-apples comparison…)
MMS 2010 Labs went so smoothly, that a similar setup was used for TechEd 2010, which performed just as well. After setting the bar so high, the team eagerly took on the challenge of improving on last year with MMS 2011. Specifically,
MMS 2011: Servers
Last year, we used HP ProLiant DL380 G6 Rack Servers. This year we decided to use HP BL460c G7 Blades in a c7000 enclosure. Moving to HP’s BladeSystem allowed us to:
From a memory standpoint, each blade was populated with 128 GB of memory, the same as each rack server last year. However, since we were using fewer servers this year (32 versus 41 last year), total memory was reduced by over 1 Terabyte. At the same time, we delivered more labs running more virtual machines than ever.
>> By using Windows Server 2008 R2 SP1 Dynamic Memory, we were able to reduce the physical memory footprint by over 1 Terabyte and still deliver more labs running more virtual machines than ever. That’s a saving of ~$80,000. <<
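A quick back-of-the-envelope check of the memory math above (the server counts and 128 GB per server come from the text; the implied per-GB price is derived from the quoted ~$80,000, not a figure from the post):

```python
# Sanity-check the "over 1 Terabyte" memory savings claim.
servers_2010 = 41
servers_2011 = 32
gb_per_server = 128  # same per-node memory both years

total_2010 = servers_2010 * gb_per_server   # 5248 GB
total_2011 = servers_2011 * gb_per_server   # 4096 GB
saved_gb = total_2010 - total_2011          # 1152 GB, i.e. "over 1 Terabyte"

print(f"Memory saved: {saved_gb} GB (~{saved_gb / 1024:.2f} TB)")
# The quoted ~$80,000 saving implies roughly this price per GB of server RAM:
print(f"Implied price: ~${80_000 / saved_gb:.0f} per GB")
```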
Hyper-V Dynamic Memory rocks!
By making these changes, the team reduced the number of racks from 3 to 2. Here’s the side-by-side comparison of MMS 2010 versus MMS 2011 from a server standpoint:
You can see that across the board, in every possible metric, the MMS 2011 servers are a significant improvement over last year’s. The systems are more powerful, offer greater scalability and improved performance, consume less power, and have fewer cables to manage; they also reduced the physical footprint by a third.
MMS 2011: Storage
Last year the team used local disks in every server. This year, they decided to change their storage strategy. Here’s what they did.
This new storage strategy resulted in massive improvements. Using the HP I/O Accelerator Cards, total IOPS performance improved by ~23,600% (no, that’s not a typo) and using the SAN allowed the team to centrally manage and share master virtual machines; every blade was a target for every lab from every seat at MMS. This strategy provided an unprecedented amount of flexibility. If we needed an extra 20 Configuration Manager labs from 1:00-2:00 and then needed to switch those to Virtual Machine Manager labs from 2:00-3:00 or Operations Manager labs from 3:00-4:00 we could. That is the flexibility of private cloud.
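To put the ~23,600% figure in perspective: a percentage improvement of p means the new number is (1 + p/100) times the old one, so the new storage delivered roughly 237 times the IOPS of last year's local disks. A quick sketch (the post doesn't give absolute IOPS figures, so only the ratio can be derived):

```python
def improvement_multiplier(percent_improvement: float) -> float:
    """Convert a percentage improvement into a 'times faster' multiplier."""
    return 1 + percent_improvement / 100

# A ~23,600% improvement works out to roughly 237x the original IOPS.
print(f"{improvement_multiplier(23_600):.0f}x")
```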
Here’s the side-by-side comparison of MMS 2010 versus MMS 2011 from a storage standpoint:
The results were simply jaw-dropping.
>> On two racks of servers, we were able to provision 1600 VMs in three minutes or about 530 VMs per minute. <<
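The headline rate is easy to verify from the numbers in the quote:

```python
# 1600 VMs provisioned in three minutes:
vms_provisioned = 1600
minutes = 3

rate = vms_provisioned / minutes
print(f"~{rate:.0f} VMs per minute")  # ~533, i.e. "about 530 VMs per minute"
```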
MMS 2011: Time for the Diagrams and Pictures
Here’s a picture of the two racks powering all of the MMS 2011 Labs. You can see them behind the Plexiglas. What you don’t see are the crowds gathered around pointing, snapping pictures, and gazing longingly…
Here’s a diagram of the rack, with the front of the rack on the left and the back on the right. The blue lines are network cables and the orange lines are Fibre Channel. Remember, last year we had 82 network cables; this year, a total of 12 cables: 8 for Ethernet and 4 for Fibre Channel.
MMS 2011: Management with System Center
Naturally, the MMS team used System Center to manage all the labs, specifically Operations Manager, Virtual Machine Manager, Configuration Manager, and Service Manager.
Operations Manager 2012 Pre-Release was used to monitor the health and performance of all the Hyper-V labs running Windows and Linux. To monitor health proactively, we used the ProLiant and BladeSystem Management Packs for System Center Operations Manager. The HP Management Packs expose the hardware’s native management capabilities through Operations Manager, such as:
It looks like this:
In terms of hardware, System Center had its own dedicated gear: it was deployed in virtual machines on a three-node Hyper-V cluster for high availability and Live Migration if needed. (It wasn’t.) Networking was 1 GbE, teamed for redundancy. For storage, iSCSI over 1 GbE was used with Multipath I/O, and the SAN was provided by the HP Virtual SAN Appliance (VSA) running within a Hyper-V virtual machine.
MMS 2011: More Data
Here’s more data…
One cool application that the Lab team wrote is called Hyper-V Mosaic, a simple application that displays live, continuously updating thumbnails of all running virtual machines. The screenshot below was taken at 2 PM on Wednesday, March 23. At the time, 1154 VMs were running on the 32 Hyper-V servers. The mosaic display is intended to give attendees a sense of the scale of the private cloud solution. (More on Hyper-V Mosaic below…)
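The post doesn't share Mosaic's source, but as an illustrative sketch (the function and names here are hypothetical, not the actual app's code), here is one way to compute a thumbnail grid for the 1154 VMs in the screenshot on a 16:9 display:

```python
import math

def mosaic_grid(vm_count: int, aspect: float = 16 / 9) -> tuple[int, int]:
    """Pick a (columns, rows) grid that fits vm_count thumbnails on a
    display with the given aspect ratio, keeping cells roughly square."""
    cols = math.ceil(math.sqrt(vm_count * aspect))
    rows = math.ceil(vm_count / cols)
    return cols, rows

cols, rows = mosaic_grid(1154)
print(f"{cols} x {rows} grid ({cols * rows} cells for 1154 thumbnails)")
```

A real implementation would also poll Hyper-V for the live thumbnail images; only the layout math is sketched here.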
Here’s a screenshot:
MMS 2011: Let’s Take this to 11
After a few days of running thousands of VMs across hundreds of labs without issue, and seeing that the hardware wasn’t being taxed, the team was very curious to see just how many virtual machines it could provision. So, one night after the labs were closed, they decided to find out…
Here’s a screen shot from PerfMon:
MMS: Physical Footprint Over the Years…
In terms of physical footprint, the team was allocated 500 sq. feet for MMS 2011 Labs and needed only 17 sq. feet. Here’s how the footprint has dropped in the last three years:
MMS 2011: Success!
As you can see, across the board and in every possible metric, the MMS 2011 system was a significant improvement over last year’s. It’s more powerful, offers greater scalability and improved performance, consumes less power, has fewer cables to manage, and uses a third less physical floor space.
From a Windows Server standpoint, Hyper-V has been in the market for three years, and this is just another example of how rock-solid, robust, and scalable it is. Hyper-V Dynamic Memory was a huge win for a variety of reasons:
From a management perspective, System Center was the heart of the system providing health monitoring, ensuring consistent hardware configuration and providing the automation that makes a lab this complex successful. At its busiest, over 2600 virtual machines had to be provisioned in less than 10 minutes. You simply can’t work at this scale without automation.
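To make the "you can't work at this scale without automation" point concrete, here is a minimal, hypothetical sketch (not the team's actual tooling, which drove Virtual Machine Manager) of fanning provisioning work out across a pool of workers instead of running it serially:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def provision_vm(vm_id: int) -> str:
    """Stand-in for a real provisioning call (e.g. submitting a
    Virtual Machine Manager job); here it just simulates latency."""
    time.sleep(0.01)
    return f"vm-{vm_id:04d}"

# Provisioning one VM at a time would never fit 2600+ VMs into a
# 10-minute window; a worker pool divides the wall-clock time by
# the number of concurrent workers.
with ThreadPoolExecutor(max_workers=50) as pool:
    names = list(pool.map(provision_vm, range(200)))

print(f"provisioned {len(names)} VMs, last one: {names[-1]}")
```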
From a hardware standpoint, the HP BladeSystem Matrix is simply exceptional. We never maxed out the system in terms of logical processors, memory, or I/O acceleration; even at peak load, running 2000+ virtual machines, we weren’t taxing the system. Not even close. Furthermore, the fact that HP integrates with Operations Manager, Configuration Manager, and Virtual Machine Manager provides incredible cohesion between systems management and hardware. If you’re looking for a private cloud solution, be sure to give the HP Cloud Foundation for Hyper-V a serious look. Watch the video where Scott Farrand, VP of Platform Software for HP, talks about how HP and Microsoft are making private cloud computing real.
Finally, I’d like to thank our MMS 2011 Platinum sponsor, HP, for their exceptional hardware and support. The HP team was extremely helpful and spent the week busy answering questions from onlookers at the lab. I have no idea how we’re going to top this.
Jeff Woolsey
Group Program Manager, Virtualization
Windows Server & Cloud
P.S. More pictures below…
Here’s a close up of one of the racks:
HP knew there was going to be a lot of interest, so they created full-size cardboard replicas diagramming the hardware in use. Here’s the front:
…and here’s the back…
During the show, there was a huge display (a 3x3 grid of LCDs) at the top of the elevator going from the first to the second floor of the Mandalay Bay Convention Center. Throughout the week it was used for messaging and hot items of the day. On the last day, the event team switched it over to show Hyper-V Mosaic, which turned out to be a huge hit. People came up the elevator, stopped, stared, and took pictures. The only problem was that we inadvertently created a traffic jam at the top of the elevators. Here’s the picture:
Is Hyper-V Mosaic something that will be released?
Much like the Azure Container, which I'm told is due any day now, I would like one of these for my garage please. How much would something like this set a guy back?
Nice! Will you be offering up a technical paper on this? More of the nuts and bolts of what was done?
What a waste of space! 2 x 36u racks they could have gone 1x42!! pfff and they are talking 'bout some great savings!!!
What is powering CommNet? Performance issues being reported everywhere.
To answer Giotis' comment.
You are absolutely right, we could pack all the gear into a single 42U rack. However, we decided to go with 2 racks so that if we "lost" one rack, we could still run all the HOLs on the remaining rack. Redundancy is key in this design when we need to think about all possible scenarios.
This is a great example of how to combine top technologies (Microsoft and HP) into one solution where 1+1 is much greater than 2.
Great example of HP coughing up more cash than anyone else to get their kit displayed
It's a great solution for any company!
In terms of cost of equipment, how much does all this hardware cost compared to last year?