Putting it all together: There are several key scenarios that are now possible with server virtualization technology in the volume server space. I touched on a few of them earlier; let's drill down into the next level of detail:

 

More Efficient and Rapid Development and Test

Moving development and test systems to a virtual environment allows you to consolidate hardware, quickly restore a system to a known state, and spin down machines that aren't actively in use. That last capability matters to a growing number of customers who are facing power and cooling challenges in the data center.
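The "restore to a known state" workflow can be pictured with a toy model. The class and method names below are illustrative only, not the Virtual Server scripting API; the point is the lifecycle: capture a baseline once, make changes freely, then discard them and power down.

```python
class DevTestVM:
    """Toy model of a dev/test virtual machine with snapshot-style restore.
    Names are hypothetical, not the Virtual Server API."""

    def __init__(self, name, baseline_state):
        self.name = name
        self.baseline = dict(baseline_state)   # known-good state, captured once
        self.state = dict(baseline_state)      # current, mutable state
        self.running = False

    def spin_up(self):
        self.running = True

    def spin_down(self):
        # A powered-off VM consumes no CPU, easing power/cooling pressure
        self.running = False

    def apply_change(self, key, value):
        self.state[key] = value                # e.g. install a test build

    def restore_baseline(self):
        # Discard all changes and return to the known-good snapshot
        self.state = dict(self.baseline)


vm = DevTestVM("test01", {"os_patch_level": "SP1", "app": None})
vm.spin_up()
vm.apply_change("app", "build-1234")   # test something
vm.restore_baseline()                  # back to a clean test bed
vm.spin_down()                         # idle VM stops drawing resources
```

The same cycle that takes hours of reimaging on physical test hardware collapses to a restore operation here, which is what makes virtual dev/test labs attractive.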

 

Faster Deployment of New Servers and Quicker Rollback of Changes

As I mentioned, customers that are well into production deployments of server virtualization tell me that they can now provision new servers for their business units in just days, down from several weeks to a couple of months. One customer shared that a few of the applications they support frequently break after a patch. With virtualization, they can now roll back any patch or change in under 45 minutes. This means they can deploy, test, and roll back a change within a single maintenance window, rather than having to wait for the next window if there's a problem.
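The fast rollback described here can be sketched with a toy undo-disk-style model (a hypothetical class, not the product API): writes accumulate in a separate layer on top of the committed disk, and that layer is either merged in or thrown away in one step.

```python
class UndoDiskVM:
    """Toy model of undo-disk style rollback (illustrative names only).
    Changes land in an 'undo' layer that can be committed or discarded."""

    def __init__(self, disk):
        self.disk = dict(disk)   # committed state
        self.undo = {}           # pending, uncommitted changes

    def write(self, key, value):
        self.undo[key] = value   # e.g. apply a patch

    def read(self, key):
        # Pending changes shadow the committed disk
        return self.undo.get(key, self.disk.get(key))

    def commit(self):
        self.disk.update(self.undo)   # keep the patch
        self.undo = {}

    def discard(self):
        self.undo = {}                # roll the patch back in one step


vm = UndoDiskVM({"app_version": "2.0"})
vm.write("app_version", "2.1")   # patch applied during the window
# ... testing shows the application breaks ...
vm.discard()                     # rollback is instant, within the same window
```

Because rollback is a discard rather than an uninstall, its duration is bounded and predictable, which is what lets deploy, test, and rollback fit in one maintenance window.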

 

More Flexible Use of Hardware Resources and Disaster Recovery

With new support for iSCSI-based clusters in Virtual Server 2005 R2 and the virtualization use right in Windows Server 2003 R2 Enterprise Edition, you can now build a cluster of servers that fails over virtual instances of the OS both proactively and reactively. You can fail over a single virtual machine (VM) when a particular server in the cluster starts to get overloaded, balancing out the load, or fail over all VMs to another physical machine in case of a hardware or other failure on that server. You've probably seen software that lets you proactively move a VM from one virtual server to another. However, the ability to use one OS technology to move workloads both proactively and reactively is incredibly powerful.
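The proactive/reactive distinction can be sketched as a simple planning function. This is an illustrative toy only (the host structure, threshold, and function name are assumptions, not cluster-service behavior): a dead node triggers reactive failover of everything it ran, while a live but overloaded node proactively sheds a VM.

```python
def plan_failovers(hosts, load_threshold=0.85):
    """Decide which VMs to move between cluster hosts.

    hosts: dict of host name -> {"alive": bool, "load": float, "vms": [names]}
    Returns a list of (vm, source_host, reason) moves.
    Sketch only; a real cluster service drives this via heartbeats
    and resource-group policies.
    """
    moves = []
    # Reactive: a failed host fails over every VM it was running
    for name, h in hosts.items():
        if not h["alive"]:
            moves += [(vm, name, "reactive") for vm in h["vms"]]
    # Proactive: an overloaded live host sheds one VM to rebalance
    for name, h in hosts.items():
        if h["alive"] and h["load"] > load_threshold and h["vms"]:
            moves.append((h["vms"][0], name, "proactive"))
    return moves


cluster = {
    "node1": {"alive": True,  "load": 0.92, "vms": ["erp", "web"]},
    "node2": {"alive": False, "load": 0.0,  "vms": ["mail"]},
    "node3": {"alive": True,  "load": 0.40, "vms": []},
}
plan = plan_failovers(cluster)
```

Here node2's failure reactively moves "mail", and node1's high load proactively moves "erp"; one mechanism covers both cases, which is the point of the paragraph above.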

 

There’s one thing I hear from customers who have been using server virtualization for some time: they are just as concerned about server sprawl today as they were in the past. Virtualization can be a great tool for reducing the number of hardware servers, but because it makes deploying OS instances easier, you have to be careful not to end up with more virtual servers than you need. As I mentioned before, Microsoft is investing both in virtualization technology and in enabling customers to get more work done with a single OS instance. Working in partnership with Intel and AMD, Microsoft is building a platform-layer virtualization solution based on the virtualization extensions both partners are developing and putting into chips starting this year. This new solution will be available in the Longhorn Server timeframe and will bring tremendous improvements in virtualization performance. Platform-layer virtualization makes it possible to deploy virtual servers as the default installation option, which means it’s time to party like it’s 1974!

 

Providing a common set of management tools that span the OS, applications, and the virtualization layer itself is a key area where Microsoft is investing today. Microsoft’s vision for Self-Managing Dynamic Systems is a marriage of the flexibility inherent in virtualization and the automation of powerful management tools across the stack. The great news is that you can start using virtualization in production today with existing management tools. When you are ready to take the next step toward dynamic systems, Windows Server 2003 R2 Enterprise Edition and Virtual Server 2005 R2, with support for iSCSI-based clusters, give your organization the ability to get more out of your virtualization solution. Greater reliability, improved flexibility, and better use of hardware resources are all possible today.