Now that Windows Vista has reached the SP1 milestone, and Windows Server 2008 has been released, it's important to update your virtualisation applications so they run optimally on these platforms. Now, I know what you're going to say - why would I use Virtual Server 2005 R2 SP1 on Windows Server 2008 when I have Hyper-V? Well, remember, Hyper-V is x64 only, and some people may want to make use of their older x86 platforms to run a few Virtual Machines.
Anyway, let's start with Virtual PC 2007 SP1:
Virtual PC 2007 SP1
You can download the update to Virtual PC 2007 here: http://www.microsoft.com/downloads/details.aspx?FamilyID=28c97d22-6eb8-4a09-a7f7-f6c7a1f000b5&displaylang=en
Not only will this upgrade the Virtual PC application, but it will also provide a new set of VM Additions, so make sure you update them inside your VMs.
You can pick up the release notes for this upgrade, here: http://www.microsoft.com/downloads/details.aspx?FamilyID=9f3d3eb5-5e03-4712-999c-e96f91bdf128&displaylang=en
Virtual Server 2005 R2 SP1
You can download the update for Virtual Server 2005 R2 SP1 here: http://www.microsoft.com/downloads/details.aspx?FamilyID=a79bcf9b-59f7-480b-a4b8-fb56f42e3348&displaylang=en
You can get all the details about this particular update here: http://support.microsoft.com/kb/948515. The same applies here with regards to updating the VM Additions within your VMs - this is something you'll have to initiate manually.
There's a useful blog post over on the Virtualisation blog covering unattended installs of WS2008 and Hyper-V, so you're ready to rock as soon as the install is complete. Check it out here: http://blogs.technet.com/virtualization/archive/2008/05/07/unattended-installation-of-windows-server-2008-with-hyper-v-rc0.aspx
This is a nice follow-up to the post I wrote the other day about applying a Ghost/ImageX image to a machine, where you need to run the Bcdedit /set hypervisorlaunchtype auto command once imaging completes. With an unattended install, this shouldn't be required.
You'll notice, from the image above, that Windows Server 2008 RTM is already listed as SP1. Is this a good thing? I think so. Iain McDonald, of the Windows Server team, agrees, and explains how the Windows Server / Windows Client relationship is better now than it's ever been...
Emma has just emailed this information through, and it's a useful addition to any Partner's arsenal, big or small. For those of you who aren't familiar...
What is the Sales Toolkit?
The Microsoft Sales Toolkit is your Partners' source for driving sales. It is the licensing and sales 'product handbook' that helps Partners generate up-sell and cross-sell opportunities, giving you and your Partners information on the full portfolio of Microsoft products and services. You can find information on what each product is, how to sell it, relevant SKU numbers, and FAQs.
This edition runs from June to December 2008 and has been updated to include the new 2008 products. This edition has been automatically sent to Partners who have subscribed online. If you require additional copies, they can be ordered from http://www.microsoft.com/uk/gearup - there is also an electronic version available for download.
Here's a useful whitepaper on setting up and testing Hyper-V HA and Quick Migration.
Hyper-V is a very cool technology. It's also a very complex technology, with a lot going on under the covers. When I deliver sessions on Hyper-V and talk about the architecture, it's a difficult concept to discuss quickly, so hopefully this post will go some way towards explaining the Hyper-V architecture in a way that's easy to remember and digest!
So, let's start with the microkernelised approach to virtualisation. The diagram on the left shows the Hyper-V hypervisor. What is the hypervisor? Well, it's a layer of code that runs natively on the hardware - it's the lowest thing on your system, and it's responsible for owning the hardware and partitioning hardware and resources. The main difference between this and a monolithic hypervisor, such as VMware's ESX, is the location of the drivers and of some core operating system components. As you can see, in Hyper-V the drivers exist in the partitions themselves, rather than in the hypervisor. This means we can massively reduce the size of the hypervisor - in fact, Hyper-V's hypervisor is around 600KB in size. That is a small piece of code. Interestingly enough, it's a small piece of Microsoft code.
That's right. It's 100% Microsoft - no 3rd party code has gone into this platform, and there are no operating system components in the hypervisor. It's just a thin sliver of reliable, secure code. Now, certain people in the industry have questioned Microsoft's driver model, based on previous experiences with Windows drivers and the fact that pretty much anyone could knock up a driver and it would be 'accepted' by Windows. I'd have to disagree when it comes to x64 drivers and Windows Server 2008. Microsoft have worked incredibly hard to provide mechanisms and structured programmes for device manufacturers to produce top-quality drivers, and sure, some will slip through the net, but as the mechanism evolves, this will reduce further and further. When it comes to x64 drivers, they all have to be signed, which means they have to go through even more testing to ensure they are of a good quality, and we've made loads of guidelines available for driver-writers (if that's what they are called!) here: http://www.microsoft.com/whdc/driver/64bitguide.mspx - so it's not like they have to find their own way around the topic. Anyway, I digress.
You may or may not have had the chance to read a whitepaper about 'Blue Pill', which is a hypervisor rootkit security whitepaper. Its name is a reference to The Matrix - in the movie, if you take the blue pill, you stay in the computerised world, continue to be controlled by the computer, and have no idea you are being controlled... The concept here is that if someone were to take control of your hypervisor - it being the lowest element on your system - the elements above the hypervisor would not know they were being controlled, and would find it very difficult to detect. With this in mind, keeping core operating system bits out of the hypervisor, and keeping it trim, clean and secure, also brings a strong level of reliability.
So, that's enough about the hypervisor as such - let's look at the rest of the architecture:
So, when it comes to the hardware: it needs Intel VT or AMD-V hardware-assisted virtualisation technology, it needs to be x64 (not Itanium), and you also need to enable the 'No Execute' bit in the BIOS. This is used to create a more secure environment. The diagram above shows an installation before enabling Hyper-V. What you see on the left-hand side of the diagram could be a full GUI installation of Windows Server 2008, or it could be a Server Core installation. Advantages of Server Core include a smaller footprint, a lower attack surface, and reduced patching, to name but a few. When we enable Hyper-V:
The hypervisor now slides under the OS, and this now becomes the lowest part of your system. Remember, it's a very thin, secure hypervisor. It's like a thin layer of veneer on the hardware. Enabling Hyper-V also brings the bits in the purple boxes:
The VM Worker Processes are individual processes spawned for each virtual machine, and are designed to handle things like emulation...
As you can see, I've added 2 virtual machines (child partitions) here to explain VM Worker Processes a little further. You can see that the 2nd VM is a non-hypervisor-aware OS, which means that the hardware it sees needs to be emulated. This is the old IO model, used in Virtual Server and Virtual PC. Why is emulation good? It's good because it emulates known hardware - known hardware such as a 440BX chipset motherboard, a DEC Ethernet controller card, and more. These are pieces of hardware that are very standard in the industry, and that nearly every operating system under the sun understands - Microsoft or otherwise! The downfall of emulation is the cost from an IO perspective. Say there's an app running in the emulated VM - Excel, for example - and it wants to save a 100KB file down to the hard drive. What happens is that Excel tries to write down into kernel mode - it's not aware that it's running on a hypervisor and thinks it has direct access to the hardware. So, what we have to do is trap that request, bring it over to the Parent partition into Kernel Mode, up into User Mode, and into the VM Worker Processes, and that's where the emulation happens. To give a crude estimate, a 100KB write takes about 80 traverses from User Mode on the Child Partition, down into Kernel Mode, up into Kernel Mode on the Parent, up into User Mode and the VM Worker Processes, and back again. So, inevitably, there's a performance hit with this type of virtualisation, but, on the flip side, you have a broad range of operating system support, such as below:
If you now look at the other Child Partition (in this case, with Windows Server 2003 / 2008 listed as the OS), this VM is not using emulation. It's using the VSP, VSC and VMBus architecture. These are the Virtualisation Service Providers and Virtualisation Service Clients, and they communicate over the high-speed, in-memory bus - VMBus - that we've created. It's 100% in memory, not physically tangible in any way, and it's been designed for IO traffic. So, effectively, the VM is writing directly to a driver (which is a VSC), and this information transfers directly to the VSP over VMBus (jumping back and forth a few times), and then onto the hardware below.
So, to expand on the VSP/VSC relationship: as you can see, we have the Parent partition on the left and the hypervisor-aware Child Partition on the right, each split into User (top) and Kernel (bottom) modes. The orange/yellow elements you can see are all Hyper-V related bits. So, on the right-hand side, the application tries to do a write, via the Windows File System, Volume, Partition, and on to the Disk. If you were using emulation, you'd keep going down, across and up to the VM Worker Processes (not using VMBus), and it would go back and forth, back and forth, before making its way to the StorPort MiniPort driver and down to the hardware.
Now, with VSP/VSC, you work your way down, it hits the VSC, and it goes across the VMBus to the VSP - still in Kernel mode; there has been no need to go into User mode to handle this. Every time you cross between User and Kernel mode, you take a performance hit, and because you go back and forth quite a few times in emulation, you take quite a few performance hits. This doesn't happen in the VSP/VSC world. Once the data is over at the VSP, it writes directly to disk and down to the hardware. Very fast and very performant. So, you get a very performant guest operating system, providing it knows it's running on the hypervisor. This does reduce the number of operating system choices by quite a way - right now, it's Windows Server 2008 and 2003 (with SP2), XP SP3 and Vista SP1.
What we're also hearing from customers is that you'd like a single platform to virtualise not only Microsoft OS's, but also Linux OS's, and that's where the partnerships with companies like Citrix, and Novell come in.
So, as you can see, we have a 3rd type of Child Partition, namely the Xen-Enabled Linux VM. This could be Novell SUSE SLES 10 SP1, as an example. This VM is running the Linux kernel, and we've worked closely with the relevant organisations to write the relevant VSCs and Hypercall Adaptors, which ensure that calls for hardware made by Xen-Enabled Linux VMs are handled in the most optimal way, rather than being pushed down the emulation route described earlier. This means certain Xen-Enabled Linux VMs really will be first-class citizens on the Hyper-V platform. I'm sure, as time goes by, you'll find even more OSes come along with the ability to take advantage of the VSP/VSC architecture, as it really is the way to go :-)
That's about it - hopefully that's helped you understand the architecture, I know it's helped me by getting it off my chest!
Great post from Jose in the US, who discusses Storage Explorer, a handy new tool in Windows Server 2008 to help you understand how your server's SAN storage is configured.
Read the full post here.
They may be a competitor, but there's no denying they've driven the Virtualisation industry forward massively over the last 10 years. Virtualization.info has a brief, but interesting post talking about the last ten years. It's hard to believe that in just 10 years, VMware has become one of the biggest software companies in the world. Credit where it's due!
Personally, I think the next 10 years will be even more interesting, now that more people have joined the party....
If you're using Hyper-V, and you've gone into the settings for a Virtual Machine, under the CPU section, you may have seen this:
What does this mean? How do you 'limit processor functionality' and what exactly is being 'limited'?
Well, these blog posts aim to explain what it does actually mean!
It's a good read, and Natasha has gone into quite a bit of detail, so if you're not of a technical nature, I'd probably jump to the last couple of paragraphs of Part 2 for a summary!
How do you deploy Windows Server? With Windows Server 2008, I typically install from the DVD, then, once that's complete, connect to Windows Update and get the latest patches, then enable the Hyper-V role in Server Manager and reboot the machine - and then, and only then, I'm good to go with my 'Hyper-V Enabled' Windows Server 2008 machine.
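As an aside, if you're on a Server Core installation (where there's no Server Manager GUI to click through), the role gets enabled from the command line instead. A quick sketch - the package name below is the one used for the Hyper-V role on WS2008 Server Core, but do double-check it on your own build before relying on it:

```
rem Enable the Hyper-V role on a Server Core installation
rem (start /w waits for the package installation to finish)...
start /w ocsetup Microsoft-Hyper-V

rem ...then reboot so the hypervisor can load underneath the OS.
shutdown /r /t 0
```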
OK, so that process, typically, would have taken about 30-45 minutes. What I could then do, before creating any Virtual Machines, is Sysprep the Operating System, then use a technology such as ImageX, or Ghost, to capture an image of my 'Hyper-V Enabled' Windows Server 2008 OS. This image could then be deployed onto another piece of hardware, and in theory, after applying that image, when I boot up Windows Server 2008, I should just be able to start creating Virtual Machines, as Hyper-V was already enabled in the image, right?
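For reference, the capture step I'm describing looks something like this - the paths and image name are just examples, and it assumes you have ImageX from the Windows AIK available and that you boot into Windows PE after the sysprep shutdown:

```
rem Generalise the OS so the image can be applied to other hardware;
rem /shutdown powers the box off once sysprep has finished.
c:\windows\system32\sysprep\sysprep.exe /generalize /oobe /shutdown

rem Then, from Windows PE, capture the volume with ImageX
rem (d:\ here is just an example location for the .wim file).
imagex /capture c: d:\ws2008-hyperv.wim "WS2008 Hyper-V Enabled"
```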
Doesn't quite work like that. Think about it: when you first install Windows Server 2008 (onto the hardware), the boot configuration data store is created and updated, and then the OS boots for the first time. When you enable Hyper-V, you are placing a hypervisor between the OS and the hardware, so the boot configuration data store is updated accordingly - that way, when you subsequently boot this OS, the hypervisor starts automatically and you can happily create virtual machines.
If you are applying a 'Hyper-V Enabled' image, you are laying down the OS and the hypervisor layer in one go, so you have to execute a short command from the command line (or include it in your unattend file as a post-installation command) to update the boot configuration data store. The command is:
bcdedit /set hypervisorlaunchtype auto
That's it! So, when you subsequently boot that OS, the hypervisor is correctly set up to run (providing the hardware you have applied the image to has Intel VT or AMD-V, and DEP / the Execute Disable bit enabled).
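If you're going down the unattend file route, the same command can be run automatically during the specialize pass. A fragment along these lines - the component and element names follow the standard Windows Setup unattend schema, but treat the exact layout as a sketch and validate it against your own unattend file:

```xml
<settings pass="specialize">
  <component name="Microsoft-Windows-Deployment"
             processorArchitecture="amd64"
             publicKeyToken="31bf3856ad364e35"
             language="neutral" versionScope="nonSxS">
    <RunSynchronous>
      <RunSynchronousCommand wcm:action="add">
        <Order>1</Order>
        <!-- Tell the boot loader to start the hypervisor on every boot -->
        <Path>bcdedit /set hypervisorlaunchtype auto</Path>
      </RunSynchronousCommand>
    </RunSynchronous>
  </component>
</settings>
```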
I'm sure this will be listed in the final documentation, but at least now you know! It caused much head scratching for me and a customer a few days back!