MMS 2010 Labs: Powered by Hyper-V, System Center & HP...


(Pardon the interruption on the Dynamic Memory blogs, but I was busy at MMS 2010 and needed to blog this content. I'll have more on DM soon.)

Virtualization Nation,

We just wrapped up Microsoft Management Summit 2010 (MMS) in Las Vegas. MMS is the premier event of the year for deep technical information and training on the latest IT management solutions from Microsoft, partners, and industry experts. MMS 2010 was a huge success on a number of fronts: from sold-out attendance (way up from last year) and compelling keynotes from Bob Muglia and Brad Anderson to the release of new products including:

...there was something for everyone. Furthermore, many folks were very pleased to learn that Opalis (our datacenter orchestration and automation platform) was joining the System Center family. Why is this a big deal? Well, for our customers who have purchased the System Center Datacenter Suite license, it means that Opalis is now included in the System Center Suite. That's customer focus. You can find out more about Opalis here.

There was a lot to experience at MMS, but I'd like to focus on something you may or may not have heard about...

MMS 2010 Labs

One of the most popular activities at MMS is the MMS Labs, which are constantly busy and fully booked. These advanced, usually multi-server labs are created and configured to walk IT professionals through a variety of tasks, such as:

  • Introducing new products (Service Manager Labs were very popular this year)
  • Exploring new product features
  • Advanced topics, automation, best practices, tips and tricks
  • and much more...

In past MMS events, virtualization has been used in a variety of ways and to varying degrees, but this year the team decided to move to an entirely Hyper-V infrastructure, and the results in terms of manageability, flexibility, power usage, shipping costs and more are staggering. What do I mean? Let's start with this factoid:

>>  The MMS 2010 Labs delivered ~40,000 Hyper-V VMs for ~80 different labs in 5 days on just 41 physical servers  <<

No, that's not a typo, that's just the tip of the iceberg.

>>  In the past for MMS, we shipped 36 RACKS,

~570 servers, to host MMS labs.

For MMS 2010, we shipped 3 RACKS.

Yes, 3 RACKS with 41 servers.

(Ok, technically, 6 half racks because they're easier to ship.)  <<

So, what were these racks filled with?

Servers. All servers were configured identically as follows:

  • 41 HP ProLiant DL380 G6 servers (dual-socket, quad-core Nehalem processors with SMT, 16 logical processors per system), each configured with 128 GB of memory and 4 x 300 GB SAS drives of local storage, striped (no SANs were used). These servers were simply incredible: performance, expandability, performance.
  • All networking was switched 1 GbE (no 10 GbE), which demonstrates the efficiency of Remote Desktop Protocol (RDP). Even with hundreds of labs running simultaneously, network bandwidth was never an issue on 1 GbE.
  • Windows Server 2008 R2 Hyper-V and System Center
  • Virtual machines were configured with 3-4 GB of memory each on average, and the majority of labs used multiple VMs.
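As a back-of-the-envelope check (my own arithmetic, not the event team's; the parent-partition reserve is an assumption), the memory figures above imply roughly how many concurrent VMs each host could hold:

```python
# Back-of-the-envelope VM density per Hyper-V host.
# Figures from this post; the parent-partition reserve is my assumption.
host_memory_gb = 128        # per DL380 G6
parent_reserve_gb = 4       # assumed memory kept for the parent partition
avg_vm_memory_gb = 3.5      # midpoint of the 3-4 GB per VM above

vms_per_host = (host_memory_gb - parent_reserve_gb) // avg_vm_memory_gb
print(f"~{int(vms_per_host)} concurrent VMs per host by memory")   # ~35

# Cross-check against the lab-reset figure of ~1300 VMs on 41 hosts:
print(f"~{1300 / 41:.0f} VMs per host during a session")           # ~32
```

The two estimates line up nicely: the hosts had just enough memory headroom for a full session's worth of VMs.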

The power draw on each server when fully loaded was about 200 watts. Maximum power draw for the 41 servers was 8,200 watts. If we do a broad comparison against our previous 570 servers (assuming a similar power draw) the comparison looks like this:

  • 570 servers * 200 watts per server = 114,000 watts
  • 41 servers * 200 watts per server = 8,200 watts

>>  Power reduction of 13.9x on the servers.  <<

Figure 1: HP DL380G6 & Hyper-V R2 Rock Solid

Rich vs. Thin Clients. On the client side, MMS has historically used rich clients at each station, averaging about 120 watts per system. For MMS 2010, thin clients running Windows Embedded 7 (deployed using Windows Deployment Services) were used, averaging about 19 watts each. From a power standpoint, the comparison looks like this:

  • Rich clients: 650 clients * 120 watts per client = 78,000 watts
  • Thin clients: 650 clients * 19 watts per client = 12,350 watts

>>  Power reduction of 6.3x on the clients.  <<
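The server-side and client-side comparisons boil down to simple arithmetic; here's the math in one place (all figures from this post):

```python
# Power comparison for servers and clients (figures from this post).
old_servers, new_servers, server_watts = 570, 41, 200
clients, rich_watts, thin_watts = 650, 120, 19

server_reduction = (old_servers * server_watts) / (new_servers * server_watts)
client_reduction = (clients * rich_watts) / (clients * thin_watts)

print(f"Servers: {old_servers * server_watts:,} W -> "
      f"{new_servers * server_watts:,} W ({server_reduction:.1f}x)")   # 13.9x
print(f"Clients: {clients * rich_watts:,} W -> "
      f"{clients * thin_watts:,} W ({client_reduction:.1f}x)")         # 6.3x
```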

Shipping. From a shipping standpoint, thin clients are smaller and weigh less than traditional rich clients.

>>  In the past for MMS, we shipped 650 rich clients for MMS labs. From a shipping standpoint this meant about 20 desktops per pallet and a total of ~32 pallets. For MMS 2010, thin clients were used and we were able to ship 650 thin clients on 3 pallets. Using thin clients instead of rich clients allowed us to use one less semi for shipping to MMS.  <<


That's right, one less semi for shipping labs to MMS. In addition to one less truck, let's not gloss over the manpower saved by not having to lift and carry 650 fifty-pound workstations...

Manageability. Before the show began, there was some initial concern about the 15-minute window between lab sessions: would that be enough time to reset more than 400 labs (roughly 1,300 VMs) across 41 servers? Resetting the labs means reverting the VMs to previous states using Hyper-V differencing disks and snapshots. The concern turned out to be totally unwarranted: resetting the full lab environment took only 5 minutes, giving the team a full 10 minutes to grin from ear to ear, er, I mean "diligently manage lab operations." :-) Speaking of System Center...
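For the curious, those reset figures work out to an impressive revert rate (my own arithmetic based on the numbers in this post):

```python
# Lab-reset throughput implied by the figures above (my arithmetic).
vms_to_reset = 1300     # ~1,300 VMs across the lab environment
hosts = 41
reset_minutes = 5       # actual reset time reported in this post

print(f"~{vms_to_reset / hosts:.0f} snapshot reverts per host")           # ~32
print(f"~{vms_to_reset / (reset_minutes * 60):.1f} reverts/sec overall")  # ~4.3
```

Because each revert just discards a differencing disk and returns to a snapshot, the per-VM cost is tiny, which is what makes this rate achievable on local storage.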

System Center. Naturally, the MMS team used System Center to manage all the labs, specifically Operations Manager, Virtual Machine Manager and Configuration Manager.

  • Operations Manager 2007 R2 was used to monitor the health and performance of all the Hyper-V labs running Windows & Linux.
  • Configuration Manager 2007 R2 was used to ensure that all of the host systems were configured in a uniform, consistent manner via Desired Configuration Management (DCM).
  • Virtual Machine Manager 2008 R2 was used to provision and manage the entire virtualized lab delivery infrastructure and monitor and report on all the virtual machines in the system.

Flexibility. Due to the overwhelming popularity of the Service Manager labs, the MMS team wanted to add more to meet the increased demand. With Hyper-V and System Center, the team was able to easily create a few dozen more Service Manager labs on the fly. In short, the MMS 2010 Labs were a huge success.

Finally, I'd like to thank our platinum sponsor, HP, for their support and the tremendous hardware.

Cheers, -Jeff

P.S. I've included a few screenshots below.



Pictures Anyone?

Here's one of the 6 Hyper-V half racks...

MMS 2010 Half Rack

Figure 2: Brings a tear to me eye...

Here's a picture of all the Hyper-V hosts. Over 40,000 lab VMs were served from 6 half racks of servers.

MMS 2010 Lab Servers

Figure 3: MMS 2010 Lab Servers...


Operations Manager 2007 R2 Dashboard View for all of the labs.

Operations Manager Monitoring


A more detailed Operations Manager 2007 R2 view providing end-to-end management: the hardware, the parent partition, and the apps running within the guests.

Ops Manager VM Monitoring



Comments
  • I was at MMS and was very impressed with the lab setup; however, the labs were very unresponsive. That did not make a whole lot of sense until I ran a perfmon, and it indicated a disk bottleneck (1-3 MB/sec max). Since you are not using a SAN, I have no idea why this occurred. Just thought you should check it out, but I got a kick out of the dashboards you had running!

  • Hi aknab. Thanks for bringing this up. You are absolutely right!

    My team (XB Velocity) set up the lab system at MMS 2010. One of the things that we noticed is that during some instructor-led labs, high-disk-intensity activities (running an OS deployment as 10 concurrent labs, etc.) may occur at exactly the same time, on the same server. That may impact other labs on the same server.

    So one of the improvements we are making at future events is to spread multiple copies of the same lab over more servers. This spreads out the disk-intensive tasks, rather than having them occur 10 times in sync on the same Hyper-V server.
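To illustrate the placement change described above, here's a minimal sketch (with hypothetical lab and host names, not the actual MMS code) of interleaving copies of the same lab across hosts so identical disk-intensive tasks don't pile up on one server:

```python
from collections import defaultdict
from itertools import cycle

def place_labs(lab_copies, hosts):
    """Round-robin copies of a lab across hosts so identical
    (often disk-intensive) labs don't all land on one server."""
    placement = defaultdict(list)
    ring = cycle(hosts)
    for lab in lab_copies:
        placement[next(ring)].append(lab)
    return dict(placement)

# 10 copies of one instructor-led lab over 5 hosts (hypothetical names)
copies = [f"OSD-lab-{i}" for i in range(10)]
hosts = [f"hv{n}" for n in range(1, 6)]
print(place_labs(copies, hosts))   # each host gets 2 copies, not one host 10
```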

    BTW - Glad that you liked the dashboard display. That got some good comments <g>.


    Ronald Beekelaar

  • Impressive stuff, indeed. Now that the show is finished, can you send me one of your racks (with the built-in cheap servers of course, don't bother removing them)?

    With all my thanks (shipping cost should stay low as I live in Switzerland)

    Best greetings

  • Just curious what the issue was with the labs on Monday? I was there as well, and the labs on Monday afternoon were unusable. It seemed like the issue got ironed out, but I'm wondering if you could elaborate on what happened? Was it the disk IO you mentioned in the previous comments? If a high-powered SAN had been used, would that have alleviated the issue?

    Looking forward to MMS 2011!

  • Hi Weston. Yes, of course we can let you know what happened on Monday afternoon.

    On Monday morning, everything was fine, but then during the day several labs became unresponsive, and some even failed to start. So if you were there on Monday afternoon, you experienced that.

    This was caused by a bug in our Web code. After we fixed that on Monday evening, all was working well until the end of the show.

    The technical explanation is as follows:

    The bug was a "logical mistake" in the way our Web site handled the load-balancing between multiple Hyper-V servers. When all 100 clients from an instructor-led lab session started at the exact same time, too many clients were assigned to the same Hyper-V server.

    This caused some labs to fail (out of memory) and not close down correctly. So during the day some running lab VMs were left behind, which made the problem more noticeable in the afternoon.

    When we changed the load-balancing logic in the Web site to be a little smarter, all was fine for the rest of the show. So this was NOT related to disk IO or scaling; it was merely a logical mistake when handling 100 concurrent lab-start requests at the exact same time.
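The actual Web code isn't published, but the fix described amounts to memory-aware placement. Here's a minimal sketch with hypothetical host names and capacities (not the MMS implementation): pick the host with the most free memory at assignment time, so 100 simultaneous starts spread out instead of piling onto one server:

```python
# Memory-aware lab placement sketch (hypothetical hosts and capacities).
def assign_labs(free_gb, requests_gb):
    """free_gb: host name -> free memory in GB (mutated as labs are placed).
    Place each lab on the host with the most free memory."""
    assignments = []
    for need in requests_gb:
        host = max(free_gb, key=free_gb.get)   # least-loaded host
        if free_gb[host] < need:
            raise MemoryError(f"no host can fit a {need} GB lab")
        free_gb[host] -= need
        assignments.append(host)
    return assignments

free = {"hv1": 120, "hv2": 120, "hv3": 120}
placed = assign_labs(free, [4] * 30)           # 30 simultaneous 4 GB lab starts
print({h: placed.count(h) for h in free})      # {'hv1': 10, 'hv2': 10, 'hv3': 10}
```

Raising an error when no host fits, instead of assigning anyway, is what prevents the out-of-memory failures described above.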

    > Looking forward to MMS 2011!

    So are we! Hope to see you there!


    Ronald Beekelaar

    XB Velocity

  • I was at MMS 2009 and 2010 and was very impressed with the lab setup and performance this year. And to add to previous comments, the dashboard was fantastic.

    So, here is my question: how/where/when can I get a detailed description of the setup for the lab? I think this is something I could use on a much smaller scale. I am a Senior Consultant, and part of what I do is train IT staff on new technology we are implementing. It would be amazing to add something like this in a very small and portable setup. And instead of using thin clients, utilize VPCs on their own hardware pushed with MDT 2010 or something. Anyway, you have really got my gears turning here, lol.

    Great Job, I am looking forward to hearing back from you.


    Frank Pinto

  • @Frank Pinto: there’s no public URL with a detailed description. Let’s take this offline and we can help you. Please click “email blog author” [find it under This Blog, left navigation bar], and share your email address and cut/paste the question. Either Jeff or Ronald will email you back.