I thought I’d start this third and final part with ‘what is Microsoft’s strategy around virtualisation’? Our strategy has been developing in this area and I think we’ve now got a pretty clear position:
“A complete set of virtualisation products, from the data center to the desktop. All assets (both virtual and physical) managed from a single platform”.
We (Microsoft) believe that we have a complete solution, when it comes to virtualisation; we have made investments in the technology, the licensing, in interoperability and in support. Let me cover off the last three first and then spend more time on the technology.
Licensing used to be a can of worms, then we made it easier, then we introduced virtualisation! Every version of Windows that you run needs to be licensed, whether it’s running on a physical server or a virtual one. We have made a few changes to make this a bit easier:
Just a small note here (for completeness and honesty) – technically you can run another virtualisation solution on a server and avail of the last two licensing deals (you just need to have assigned the license to that server).
In the context of virtualisation, interoperability has been well documented recently – the Novell agreement, the XenSource agreement, moving the virtual hard disk (VHD) specification under the Open Specification Promise, and of course running and supporting Linux guest virtual machines on Virtual Server. But these are just the higher-profile examples; we’ve also done more subtle things to enable and support interoperability. For example, the APIs for Virtual Server 2005 have been publicly available on MSDN since day one. And many of the preliminary details of the Windows Server virtualisation (part of Longhorn) APIs were shared at WinHEC last year, in the documentation given to each attendee. Like other Windows APIs, we plan to publish these publicly at beta.
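To make that VHD point concrete: because the format is openly documented, anyone can read or write VHD files with a few lines of code. Here’s a minimal Python sketch (illustrative only, not production code) that builds and parses the 512-byte footer of a fixed-size VHD, including the one’s-complement checksum the specification defines:

```python
import struct

VHD_FOOTER_SIZE = 512
DISK_TYPES = {2: "fixed", 3: "dynamic", 4: "differencing"}

def vhd_checksum(footer: bytes) -> int:
    # One's complement of the byte sum, with the checksum field
    # (offsets 64-67) treated as zero.
    total = sum(footer[:64]) + sum(footer[68:])
    return (~total) & 0xFFFFFFFF

def make_fixed_footer(size_bytes: int) -> bytes:
    """Build a minimal footer for a fixed-size VHD of size_bytes."""
    footer = bytearray(VHD_FOOTER_SIZE)
    footer[0:8] = b"conectix"                            # magic cookie
    struct.pack_into(">I", footer, 12, 0x00010000)       # format version 1.0
    struct.pack_into(">Q", footer, 16, 0xFFFFFFFFFFFFFFFF)  # fixed disks have no dynamic header
    struct.pack_into(">Q", footer, 40, size_bytes)       # original size
    struct.pack_into(">Q", footer, 48, size_bytes)       # current size
    struct.pack_into(">I", footer, 60, 2)                # disk type 2 = fixed
    struct.pack_into(">I", footer, 64, vhd_checksum(bytes(footer)))
    return bytes(footer)

def parse_footer(footer: bytes) -> dict:
    """Validate the cookie and checksum, return a few key fields."""
    assert footer[0:8] == b"conectix", "not a VHD footer"
    stored = struct.unpack_from(">I", footer, 64)[0]
    assert stored == vhd_checksum(footer), "checksum mismatch"
    disk_type = struct.unpack_from(">I", footer, 60)[0]
    return {
        "current_size": struct.unpack_from(">Q", footer, 48)[0],
        "disk_type": DISK_TYPES.get(disk_type, "unknown"),
    }
```

A real fixed VHD is simply the raw disk image with this footer appended; dynamic and differencing disks add more structures, all described in the published specification.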
Thirdly, support. I believe Microsoft has the best support organisation out there – mind you, you get what you pay for. All of our ‘self help’ content is on the web; we offer free support for security and Internet Explorer related issues; and you can give us your credit card details and we’ll help you with anything (and if it turns out to be our fault, we won’t charge you). Give us a bit more money and we’ll give you 24x7 support and get the guy who wrote the piece of code causing the error out of his bed to fix your issue. Give us a bit more and you can have engineers on site, technical account managers representing you inside Microsoft, training, design reviews, you name it – you do get what you pay for. In the context of virtualisation, support simply means that you have one place to ‘point your finger at’ (one arse to kick) – if it’s a Microsoft product, we’ll fix it (we own the entire stack, so we can troubleshoot the entire stack). It’s actually a bit better than that – if it’s one of the supported, non-Windows operating systems and it’s running in Virtual Server, we’ll even fix that.
Now onto the technology – apologies if this comes across as a big list (but we do have a lot to offer).
Virtual Server has to be first, as it is the most cost-effective server virtualisation technology available (it’s free, as is Virtual PC). Virtual Server increases hardware utilisation and enables organisations to rapidly configure and deploy new servers. It virtualises the hardware of a complete physical computer in software and lets you install and run multiple virtual machines. Virtual Server has only been out since 2005, so we’ve been playing ‘catch up’ to a degree (the technology has been around for a lot longer though – I used it during the Windows Server 2003 launch event to run all my demos). We are currently at Virtual Server 2005 R2 SP1 (well, nearly – SP1 is in its final beta) and it can do pretty much anything you’d ever need a virtualisation platform to do (including physical-to-virtual migrations). The R2 release introduced 64-bit support for the host machine (which, simply put, means more memory, which equates to more guest machines and more performance). We also released support for running Virtual Server on a cluster, which offers high availability for any virtual machines you run by letting them fail over to another physical host in the event of planned or unplanned downtime. SP1 adds Volume Shadow Copy support (snapshot backups), support for hardware-assisted virtualisation (Intel VT and AMD-V), Vista as a guest, the ability to mount a VHD file (for offline edits), bigger VHDs, and a few other features which escape me. You can manage Virtual Server entirely by scripting against its COM (Component Object Model) API, or you can use the web-based administration console it ships with. A quick note here – you do not need to have IIS installed to run Virtual Server (you only need it on the machine you are going to run the admin console on).
Management is next; we want to make Windows the most manageable platform out there by enabling customers to manage both physical and virtual environments using the same tools, knowledge and skills. No other virtualisation platform provider is delivering this. Today customers can use Microsoft Operations Manager (MOM) with the management pack for Virtual Server. It knows which workloads are running on physical servers and which are virtual, and treats the two differently; MOM has best-practice advice and guidance built in, and treats one host server as a single device (even though it could be running multiple virtual machines).
Next up is Virtual PC 2007. Virtual PC lets you create separate virtual machines on your Windows desktop, each of which virtualises the hardware of a complete physical computer. You can run multiple operating systems at once on a single physical computer and switch between them as easily as switching applications. Virtual PC is the ideal solution for tech support, legacy application support, training, or just for consolidating physical computers.
Microsoft SoftGrid Application Virtualisation is next on my list; it is the only virtualisation solution on the market that delivers applications which are never installed, yet securely follow users anywhere, on demand. SoftGrid turns applications into network services that no longer need to be installed, which lets you keep your PCs in a known, locked-down state while still letting your users get on with their jobs. Application virtualisation provides a finer-grained solution than virtual machines, and is better suited to enterprise desktop environments.
Next to mention is part of Windows Server itself. Terminal Services is an integrated part of all Windows Server versions and allows you to remote the video, keyboard and mouse over the network to your desktop. I know Terminal Services isn’t really what you’d call a virtualisation technology – but it is (your desktop hasn’t really got eight processors – has it?).
There’s even virtualisation in Windows Vista – if your application tries to write to the Windows or Program Files folders, or to protected parts of the registry, Vista won’t let it. Instead of just failing the write, Vista virtualises the file system and the registry and your application just works (the write actually lands in your user profile, out of harm’s way, keeping Vista stable and uncompromised).
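Conceptually, the file-system side of that redirection is simple. The sketch below is a toy Python illustration of the idea only (the real logic lives inside Vista and redirects writes into the per-user VirtualStore folder; the paths and helper name here are illustrative):

```python
# Toy illustration of Vista-style file virtualisation: writes aimed at
# protected locations are transparently redirected into the user's profile.
PROTECTED = ("C:\\Windows", "C:\\Program Files")
VIRTUAL_STORE = "C:\\Users\\{user}\\AppData\\Local\\VirtualStore"

def virtualise_write(path: str, user: str) -> str:
    """Return the path a write to `path` would actually land on."""
    for root in PROTECTED:
        if path.lower().startswith(root.lower()):
            # Drop the drive letter; the rest moves under VirtualStore.
            relative = path[3:]  # strip "C:\"
            return VIRTUAL_STORE.format(user=user) + "\\" + relative
    return path  # unprotected locations are written in place
```

The application still believes it wrote to Program Files; subsequent reads by that user are satisfied from the VirtualStore copy, which is why legacy applications ‘just work’.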
We’re extending the virtual infrastructure management capabilities with System Center Virtual Machine Manager (SCVMM) soon, which will allow you to increase physical server utilisation, centralise management of your virtual machine infrastructure and quickly provision new virtual machines. It’s fully integrated with the System Center product family, so you can leverage your existing skill sets. Even though SCVMM is currently in beta, I would recommend you look at it; especially its physical-to-virtual (P2V) migration feature and its self-service portal.
Longhorn is due out later this year and will include Windows Server Virtualisation, which will introduce things like live migration, support for up to eight virtual processors per machine (32 or 64-bit), and hot add of resources such as disk, networking, memory and CPU (you’ll need a guest operating system that knows what to do if it suddenly sees more memory or CPUs – Longhorn?). Windows Server Virtualisation is a hypervisor-based virtualisation solution that runs at ring -1 (below the kernel) and is very thin.
So, by the end of this year, we’ll be able to do this:
I need a new server to run my line-of-business application. I go to the self-service portal on my intranet and request a server. Within minutes I am running a virtual server (with potentially up to eight 64-bit processors and 32GB of memory). System Center is looking after my server and maintaining the service level I need: it’s being kept up to date, backed up and monitored. If System Center decides that the best fix for a problem is to add more virtual hardware, it will. My virtual server is running on a physical server somewhere; I don’t need to know where, because System Center made that decision based on its knowledge of my datacenter and of my workload. If the physical host needs to be shut down, or hits an error, my virtual machine will continue running on another box. I’ll receive regular reports in my inbox detailing my server’s health and performance (I’ll probably get a bill too, based on how much resource I used).
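That placement decision – ‘which physical host should run this virtual machine?’ – can be sketched very simply. The toy Python example below is purely illustrative (hypothetical host names and a naive most-free-memory rule, not SCVMM’s actual algorithm):

```python
# Toy sketch of virtual machine placement: pick the host with the most
# free memory that can satisfy the request, then account for the usage.
def place_vm(hosts: dict, needed_gb: int) -> str:
    """hosts maps host name -> free memory in GB; returns the chosen host."""
    candidates = {h: free for h, free in hosts.items() if free >= needed_gb}
    if not candidates:
        raise RuntimeError("no host has enough free memory")
    best = max(candidates, key=candidates.get)
    hosts[best] -= needed_gb  # reserve the memory on the chosen host
    return best
```

Failover falls out of the same idea: if a host disappears, its virtual machines are simply re-placed across the hosts that remain.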
This might sound a bit far-fetched – but it’s real, and all I’m waiting on is a couple of products to come out of beta. If I can’t wait, I can probably do 80 percent of it now anyway – certainly all I’d ever want (or need) to do with virtualisation just now.
I hope you enjoyed reading this series of articles as much as I did writing them.