A blog by Jose Barreto, a member of the File Server team at Microsoft.
All messages posted to this blog are provided "AS IS" with no warranties, and confer no rights.
Information on unreleased products is subject to change without notice.
Dates related to unreleased products are estimates and are subject to change without notice.
The content of this site consists of my personal opinions and does not necessarily represent Microsoft Corporation's views.
The information contained in this blog represents my view on the issues discussed as of the date of publication.
You should not consider older, out-of-date posts to reflect my current thoughts and opinions.
© Copyright 2004-2012 by Jose Barreto. All rights reserved.
Windows Server 2008’s Hyper-V has been in public beta for a while now and lots of people have been experimenting with it. One aspect that I am focusing on is storage for those virtualized environments and more specifically the options related to SAN storage.
Before we start, I wanted to define some terms commonly used in virtualization. We refer to the physical computer running the Hyper-V software as the parent partition or host, as opposed to the child partition or guest, which is the term used for a virtual machine. You can say, for instance, that the host must support hardware-assisted virtualization, or that you can now run a 64-bit OS in the guest.
The other term used with Hyper-V is Integration Components. This is the additional software you run on the guest to better support Hyper-V. Windows Server 2008 already ships with Hyper-V Integration Components, but older operating systems will need to install them separately. In Virtual Server or Virtual PC, these were called “additions”.
Exposing storage to the host
A Hyper-V host is a server running Windows Server 2008, and it supports the many different storage options of that OS. This includes directly-attached storage (SATA, SAS) and SAN storage (FC, iSCSI). Once you expose the disks to the host, you can expose them to the guest in many different ways.
VHD or passthrough disk on the host
As with Virtual Server and Virtual PC, you can create a VHD file on one of the host’s volumes and expose that as a virtual hard disk to the guest. This VHD functions simply as a set of blocks, stored as a regular file using the host OS file system (typically NTFS). There are a few different types of VHD, like fixed size or dynamically expanding. This hasn’t changed from previous versions. The maximum size of a VHD continues to be 2040 GB (8 GB short of 2 TB).
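The VHD format itself shows how simple this is: a fixed-size VHD is nothing more than the raw disk blocks followed by a 512-byte footer. Here is a minimal, hypothetical Python sketch of that footer, based on the published VHD specification (the CHS geometry field is left zeroed for brevity, so treat this as an illustration rather than a production-ready image builder):

```python
import time
import uuid

def vhd_fixed_footer(disk_size):
    """Build the 512-byte footer of a fixed-size VHD.

    A fixed VHD is just the raw disk blocks followed by this footer.
    Field layout follows the published VHD specification; the CHS
    geometry field is left zeroed here for brevity (real tools compute
    it with the spec's algorithm).
    """
    f = bytearray(512)
    f[0:8] = b"conectix"                        # cookie
    f[8:12] = (2).to_bytes(4, "big")            # features (reserved bit set)
    f[12:16] = (0x00010000).to_bytes(4, "big")  # file format version 1.0
    f[16:24] = b"\xff" * 8                      # data offset: none for fixed disks
    ts = int(time.time()) - 946684800           # seconds since 2000-01-01 UTC
    f[24:28] = ts.to_bytes(4, "big")            # timestamp
    f[28:32] = b"demo"                          # creator application (made-up tag)
    f[32:36] = (0x00010000).to_bytes(4, "big")  # creator version
    f[36:40] = b"Wi2k"                          # creator host OS: Windows
    f[40:48] = disk_size.to_bytes(8, "big")     # original size
    f[48:56] = disk_size.to_bytes(8, "big")     # current size
                                                # f[56:60] geometry left zeroed
    f[60:64] = (2).to_bytes(4, "big")           # disk type 2 = fixed
    f[68:84] = uuid.uuid4().bytes               # unique id
    # checksum: ones' complement of the byte sum with the checksum field zeroed
    f[64:68] = ((~sum(f)) & 0xFFFFFFFF).to_bytes(4, "big")
    return bytes(f)

# Appending this footer to 2 GB of raw data would approximate a fixed VHD image.
footer = vhd_fixed_footer(2 * 1024**3)
print(len(footer))
```

This also makes it clear why a VHD on an NTFS volume behaves like any other file: everything Hyper-V needs to interpret the blocks lives in that trailing footer.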
You can now expose a host disk to the guest without even putting a volume on it, using a passthrough disk. Hyper-V will let you “bypass” the host’s file system and access a disk directly. This raw disk, which is not limited to 2040 GB in size, can be a physical HD on the host or a logical unit on a SAN. To make sure the host and the guest are not trying to use the disk at the same time, Hyper-V requires the disk to be in the offline state on the host. When the disk being exposed to the guest is, from the host’s perspective, a LUN on a SAN, this is referred to as LUN passthrough. With passthrough disks you will lose some nice VHD-related features, like VHD snapshots, dynamically expanding VHDs and differencing VHDs.
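As a sketch of the host-side step, taking the disk offline before assigning it to the guest as a passthrough disk can be scripted with diskpart on Windows Server 2008 (the disk number below is just an example; check the output of list disk first):

```
rem save as offline-disk.txt and run: diskpart /s offline-disk.txt
list disk
select disk 2
offline disk
```

Once the disk shows as offline on the host, it becomes selectable as a physical hard disk in the virtual machine settings.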
IDE or SCSI on the guest
When you configure the guest’s virtual machine settings, you need to choose how to show the host disk (be it VHD file or passthrough disk) to the guest. The guest can see that disk either as a virtual ATA device on a virtual IDE controller or as a virtual SCSI disk device on a virtual SCSI controller. Note that you do not have to expose the device to the guest in the same way it is exposed to the host. For instance, a VHD file on a physical IDE disk on the host can be exposed as a virtual SCSI disk on the guest. A physical SAS disk on the host can be exposed as a virtual IDE disk on the guest.
The main decision criteria here should be the capabilities you are looking for on the guest. You can only have up to 4 virtual IDE disks on the guest (2 controllers with 2 disks each), but they are the only types of disk that the virtualized BIOS will boot from. You can have up to 256 virtual SCSI disks on the guest (4 controllers with 64 disks each), but you cannot boot from them and you will need an OS with Integration Components. Virtual IDE disks will perform at the same level as virtual SCSI disks after you load the Integration Components in the OS, since they leverage the same optimizations.
You must use SCSI if you need to expose more than 4 virtual disks to your guest. You must use IDE if your guest needs to boot to that virtual disk or if there are no Integration Components in the guest OS. You can also use both IDE and SCSI with the same guest.
iSCSI directly to guests
One additional option is to expose disks directly to the guest OS (without ever exposing them to the host) by using iSCSI. All you need to do is load an iSCSI initiator in the guest OS (Windows Server 2008 already includes one) and configure your target correctly. Hyper-V’s virtual BIOS does not support booting to iSCSI directly, so you will still need to have at least one disk available to the guest as an IDE disk so you can boot to it. However, all your other disks can be iSCSI LUNs.
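As a sketch, attaching a data LUN from inside a Windows Server 2008 guest with the built-in iscsicli tool could look like this (the portal address and target IQN below are placeholders):

```
rem point the initiator at the iSCSI portal and list the targets it offers
iscsicli QAddTargetPortal 192.168.1.50
iscsicli ListTargets

rem log in to the target; the disk then appears in Disk Management
iscsicli QLoginTarget iqn.1991-05.com.microsoft:san1-vm1-target
```

The same configuration can also be done through the iSCSI Initiator applet in Control Panel if you prefer a GUI.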
There are also third-party solutions that will allow a Hyper-V guest to boot from an iSCSI LUN exposed directly to the guest. You can check a product from EmBoot called WinBoot/i, which does exactly that, at http://www.emboot.com.
Moving disks between hosts
Another common usage scenario in virtualization is moving a virtual machine from one host to another. You will typically shut down the guest (or pause it), move the storage resources and then bring the VM up in the new host (or resume it).
The “move the storage” part is easier to imagine if you are using VHD files for guest disks. You simply copy the files from host to host. If you’re using physical disks (let’s say, SAS drives that are passthrough disks exposed as IDE disks to the guest), you can physically move the disk to another host. If this is a LUN on a SAN, you would need to reconfigure the SAN to mask the LUN to the old host and unmask it to the new host. You might want to use a technology called NPIV to use “virtual” WWNs for a set of LUNs, so you can move them between hosts without the need to reconfigure the SAN itself. This would be the equivalent of using multiple iSCSI targets for the same Hyper-V host and reconfiguring the targets to show up on the other host. If you use iSCSI directly exposed to the guest, those iSCSI data LUNs will just move with the guest, assuming the guest continues to have a network path to the iSCSI target and that you used one of the other methods to move the VM configuration and boot disk.
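For the VHD case, the copy itself can be a one-liner (the paths and host name here are hypothetical):

```
rem copy the guest's VHD files to the same location on the new host
robocopy D:\VMs\VM1 \\NEWHOST\D$\VMs\VM1 *.vhd
```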
Windows Server 2008 is also a lot smarter about using LUNs on a SAN, so you might consider exposing LUNs to multiple Hyper-V hosts and onlining the LUNs as required, as long as you don't access them simultaneously from multiple hosts.
Keep in mind that, although I am talking about doing this manually, you will typically automate the process. Windows Server Failover Clustering and System Center Virtual Machine Manager (VMM) can make some of those things happen automatically. In some scenarios, the whole move can happen in just seconds (assuming you are pausing/resuming the VM and the disks are in a SAN). However, there is no option today with a robot to physically move disks from one host to another :-).
A few tables
Since there are lots of different choices and options, I put together a few tables describing the scenarios. They will help you verify the many options you have and what features are available in each scenario.
Table 1 – where the disk can live: as a VHD on a host volume, as a passthrough disk on the host, or exposed directly to the guest, for each type of storage, such as DAS (SAS, SATA) or SAN (FC, iSCSI).

Table 2 – the three main ways to expose a disk to the guest:

- DAS or SAN on host, VHD or passthrough disk on host, exposed to guest as IDE: the guest can boot from the disk; Integration Components are optional (a); the guest sees the disk as a “Virtual HD ATA Device”; maximum of 2 x 2 = 4 disks.
- DAS or SAN on host, VHD or passthrough disk on host, exposed to guest as SCSI: the guest cannot boot from the disk and requires Integration Components; the guest sees the disk as a “Msft Virtual Disk SCSI Disk Device”; maximum of 4 x 64 = 256 disks.
- Not exposed to host, exposed to guest as iSCSI LUN: the guest cannot boot from the disk without a third-party product (i) and requires an iSCSI initiator; the guest sees the disk as a “MSFT Virtual HD SCSI Disk Device”; the number of disks is not limited by Hyper-V.

The table also covers guest hot add of disks and guest hardware snapshots on the SAN for each option.
Table 3 – the eight detailed scenarios:

1. IDE VHD Local
2. SCSI VHD Local
3. IDE Passthrough Local
4. SCSI Passthrough Local
5. IDE VHD Remote
6. SCSI VHD Remote
7. IDE Passthrough Remote
8. SCSI Passthrough Remote

For each scenario, the table compares: how the disk is exposed to the host (for instance, as a VHD on NTFS); how it is exposed to the guest; whether the guest driver is “synthetic”; the guest maximum disk size (~2 TB (c) for VHD scenarios, or the limit imposed by the guest (d) (e) for passthrough scenarios); support for Hyper-V VHD snapshots and dynamically expanding VHDs; SCSI-3 PR for guests on two hosts (WSFC); guest hardware snapshots on the SAN; P2V migration without moving SAN data; and VM migration without moving SAN data.

Notes:

(a) Works as legacy IDE but will perform better if Integration Components are present.
(b) Works as legacy network but will perform better if Integration Components are present.
(c) Hyper-V maximum VHD size is 2040 GB (8 GB short of 2 TB).
(d) Not limited by Hyper-V. NTFS maximum volume size is 256 TB.
(e) Microsoft iSCSI Software Target maximum VHD size is 16 TB.
(f) Requires SAN reconfiguration or NPIV support, unless using a failover cluster.
(g) For data volumes only (cannot be used for boot/system disks).
(h) Requires SAN reconfiguration or NPIV support, unless using a failover cluster. All VHDs on the same LUN must be moved together.
(i) Requires a third-party product like WinBoot/i from EmBoot.
(j) Not limited by Hyper-V.
Screenshot of settings for scenario 2 in table 3 (VHD exposed as SCSI):
Screenshot of settings for scenario 8 in table 3 (iSCSI LUN passthrough exposed as IDE, which your guest can boot from):
Updated on 03/30/2008 to reflect the change to 256 (4x64) virtual SCSI disks with the release of the Hyper-V RC.
Updated on 03/06/2008 with additional details on iSCSI boot on guest. Check details at http://blogs.technet.com/josebda/archive/2008/03/06/more-on-storage-options-for-windows-server-2008-s-hyper-v.aspx.
Updated on 04/27/2008 to include titles for scenarios on Table 3 as suggested by Jeff Woolsey.
Updated on 05/09/2008 to include information about VHD snapshots, dynamically expanding VHDs and differencing VHDs.
Is this statement correct?
"You can only have up to 4 virtual IDE disks on the guest (2 controllers with 2 disks each), but they are the only types of disk that the virtualized BIOS will boot from."
I currently have a pile of Virtual Server VMs that all have virtual SCSI adapters and virtual SCSI drives attached and they all boot fine.
Actually you can boot a Hyper-V virtual machine off of iSCSI by assigning the iSCSI LUN to the parent partition and then directly attaching it to the virtual machine.
Hyper-V does indeed support iSCSI boot, both for the hypervisor-on-2008 and for its guest VMs. We've tested both successfully.
For our tests on guest VMs, we first added a legacy NIC (DEC 21140). We then installed our winBoot/i software as per our normal procedure, copied the VM's contents up to an iSCSI LUN, set the BIOS to boot from network and it worked.
Booting a Hyper-V-on-2008 was a bit trickier - the Hyper-V installation adds bindings to an existing NIC that can get in the way of iSCSI boot. As per our past experience with this on Virtual Server 2005 R2 (see http://188.8.131.52/forum/forums/thread-view.asp?tid=203&posts=1)
the trick is to use a 2nd NIC bound to Hyper-V and then set the boot NIC unbound from Hyper-V. We then installed our winBoot/i v2.5 beta client on this, SystemCopy up to iSCSI SAN, and we then booted successfully from iSCSI with VMs still working under Hyper-V.
Screenshots and documentation details will be added to our website support forums within a few days.