In Part 1 I covered the physical hardware setup around the network, storage and compute of the Home Hyper-V test lab.

In Part 2, I'll cover the Hyper-V and cluster configuration, where I run the VMs from and why, and the optimisation of read performance using differencing disks and the CSV block cache.

To expand on Part 1, I have four physical machines in the test lab.

  • 2 x Hyper-V Cluster hosts running Windows Server 2012 Hyper-V (Server Core)
  • 1 x Windows Server 2012 box running as an iSCSI target to provide shared storage to the Cluster
  • 1 x Gaming rig running Windows 8 Pro

As briefly mentioned in Part 1, the gaming rig is a home system, so it's not joined to the test lab domain. This makes it easy to work that system into a HomeGroup network, so stuff can be shared with the Home Theatre PC and the 10 thousand tablets I have in the house.

The great thing about Windows 8 Pro is Client Hyper-V. This allows me to run some VMs that are independent of the cluster. There is no sacrifice in VM performance, because Client Hyper-V is a Type 1 bare-metal hypervisor just like Windows Server Hyper-V.

I have three VMs running on the Gaming Rig: the test lab Domain Controller, an MDT box and an RDS server. With the hardware I used (itemised in Part 1), I was able to bare-metal build the two Hyper-V cluster hosts over the network using MDT, with a standard server deployment task sequence. I did not have to worry about any additional driver injection; everything just worked out-of-box. For those not familiar with MDT (Microsoft Deployment Toolkit), it is simply the best way of accelerating Windows image deployment. If you need to get image deployment happening quickly, MDT is the way to do it.

http://technet.microsoft.com/en-us/solutionaccelerators/dd407791.aspx

The new hardware will hit the MDT box via the Preboot Execution Environment (PXE). This requires the WDS and DHCP server roles to be installed on the MDT box.
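If you're building the MDT box from scratch, adding those roles is a one-liner each. This is just a sketch of the idea using the Server 2012 feature names; you can do the same thing from Server Manager:

Install-WindowsFeature WDS -IncludeManagementTools    # Windows Deployment Services
Install-WindowsFeature DHCP -IncludeManagementTools   # DHCP Server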

I won't go into the step-by-step detail of how to set up the deployment share and import operating systems. This is very well covered in the MDT documentation; the 'Quick Start Guide for Lite Touch Installation' is a good place to start.

The task sequence I show here is about as out-of-box as it gets. Windows has all the drivers out of box, so I did not have to worry about any additional drivers. I have set the Task Sequence to detect the model type, and if it is being deployed to my physical hardware, it will run the additional tasks to set up Hyper-V.

Below you can see the server build Task Sequence. Before you say 'that looks like a lot of work', all of the steps, except for the activities under the 'Hyper-V Cluster Node' group, are generated automatically by MDT.

The Options tab is where you set the logic for the group of tasks to only run if certain hardware is detected.

I didn't have to do any scripting here; the 'Product' variable is populated by the ZTIGather script, which runs by default as part of an MDT task sequence.
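If you want to see what value 'Product' will get on your own hardware before building the condition, you can query WMI directly. As far as I know ZTIGather pulls Product from the baseboard, so something like this should show it:

Get-WmiObject -Class Win32_BaseBoard | Select-Object -Property Product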

The custom actions were also quite easy, the first two actions enable the Hyper-V role and install the Failover Cluster feature (you could do this as a single step, but I like to keep things modular).
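If you'd rather script those two steps than use the built-in role installation actions, the rough PowerShell equivalent on the host being built would be:

Install-WindowsFeature Hyper-V -Restart       # Hyper-V role (the host reboots to finish)
Install-WindowsFeature Failover-Clustering    # Failover Clustering feature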

The next two steps set my MSDN MAK product key and activate Windows. This works because the lab has access to the internet.

You can set the product key when setting up the Task Sequence, but the other way to do it is with the slmgr.vbs script in Windows: the /ipk switch sets the product key and the /ato switch activates Windows.
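As a run-command-line step in the task sequence, that looks something like this (the key below is obviously a placeholder for your own MAK):

cscript //B %windir%\system32\slmgr.vbs /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
cscript //B %windir%\system32\slmgr.vbs /ato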

The last step is to import the MDT boot media into WDS so it can answer PXE requests from your bare-metal servers.

Launch the Windows Deployment Services console, right-click the 'Boot Images' item and select 'Add Boot Image', then browse to the Boot directory of the deployment share and select the LiteTouchPE_x64.wim file. Once imported, WDS will serve the MDT boot image to any client it answers a PXE request from.
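If you prefer to script the import, the WDS PowerShell module in Server 2012 can do the same thing; a quick sketch, with the deployment share path assumed:

Import-WdsBootImage -Path "D:\DeploymentShare\Boot\LiteTouchPE_x64.wim" -NewImageName "MDT LiteTouch x64"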

I was able to kick both builds off at the same time, and in 15 minutes I had two fully built Hyper-V hosts ready for the cluster configuration.

iSCSI SAN

A Hyper-V cluster requires common storage, and that's where the server running as an iSCSI target comes in. This is just stupid easy to set up. I have Windows Server 2012 (Full UI) installed and simply allocated each drive (SSD and spindle) to its own LUN.

The following is a great blog on the setup and use of the iSCSI Target in Windows Server 2012.

As the blog covers things in great detail, I won't go into the setup too much here. As you can see, I have two LUNs presented (one on the fast SSD and the other on the slower spindle), plus another LUN provided for the cluster witness disk.
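For reference, carving out the LUNs with the iSCSI Target cmdlets looks roughly like this. The paths, sizes, target name and initiator IQNs below are made up for illustration; adjust them to your own lab (and note the size parameter was renamed -SizeBytes in 2012 R2):

# Create a virtual disk on each physical drive (one on the SSD, one on the spindle, plus a small witness)
New-IscsiVirtualDisk -Path "S:\iSCSI\SSD-LUN.vhd" -Size 200GB
New-IscsiVirtualDisk -Path "D:\iSCSI\Spindle-LUN.vhd" -Size 500GB
New-IscsiVirtualDisk -Path "D:\iSCSI\Witness.vhd" -Size 1GB
# Create a target that only the two Hyper-V hosts are allowed to connect to
New-IscsiServerTarget -TargetName "HyperVCluster" -InitiatorIds "IQN:iqn.1991-05.com.microsoft:hyperv01.lab.local","IQN:iqn.1991-05.com.microsoft:hyperv02.lab.local"
# Map the virtual disks to the target
Add-IscsiVirtualDiskTargetMapping -TargetName "HyperVCluster" -Path "S:\iSCSI\SSD-LUN.vhd"
Add-IscsiVirtualDiskTargetMapping -TargetName "HyperVCluster" -Path "D:\iSCSI\Spindle-LUN.vhd"
Add-IscsiVirtualDiskTargetMapping -TargetName "HyperVCluster" -Path "D:\iSCSI\Witness.vhd"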

 

From there I'm using the built-in iSCSI initiator software on each Hyper-V host to make the connection to the iSCSI target.

 

When the connection is made, the LUNs created on the 2012 iSCSI server come in as normal disks.
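The initiator side can also be scripted; something like this on each Hyper-V host should do it (192.168.0.1 is the SAN's first iSCSI NIC in my addressing scheme below):

# Make sure the iSCSI initiator service is running and starts automatically
Start-Service MSiSCSI
Set-Service MSiSCSI -StartupType Automatic
# Point the initiator at the target portal and connect persistently
New-IscsiTargetPortal -TargetPortalAddress 192.168.0.1
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true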

 

 

Cluster Configuration

 

While it's true I could probably get better capacity by putting the same amount of money into a single machine, I gain a good degree of flexibility by having a cluster. Zero-downtime fabric patching is just one advantage (I'll cover this in a later blog post).

Clusters in Windows Server 2012 are not that hard to understand. The most complex things in a production design are the networking and the storage.

I've covered the storage above. The network setup is by no means best practice: I'm dedicating a single 1Gb NIC to iSCSI and some cluster functions, and the second NIC is used for just about everything else.

The following table illustrates the networks needed by a Hyper-V cluster and how I've applied them.

Network | Description | Allocation
Host Management | Network used by the Hyper-V host to talk to external services (AD, VMM, management, etc.) | NIC1
VM Network | Attached to a virtual switch; the network that all VMs talk on | NIC1
Live Migration | Network used for Live Migration traffic (http://technet.microsoft.com/en-us/library/hh831435.aspx) | NIC1
CSV redirect network | Network used for CSV redirected I/O (http://technet.microsoft.com/en-us/library/ff182358(v=WS.10).aspx#BKMK_redirected) | NIC2
Secondary Cluster network | Second network path for cluster communications | NIC2
iSCSI Storage | Path for iSCSI storage | NIC2

 

IP Addressing

Just for an example and so you get the idea, I've included the IP addressing scheme I've used.

Server Name | NIC | IP Address
HyperV01 | Host/Live Migration | 10.100.0.14
HyperV01 | CSV Redirect/Secondary Cluster/iSCSI | 192.168.0.2
HyperV02 | Host/Live Migration | 10.100.0.15
HyperV02 | CSV Redirect/Secondary Cluster/iSCSI | 192.168.0.3
iSCSISAN | Host | 10.100.0.3
iSCSISAN | iSCSI1 | 192.168.0.1
iSCSISAN | iSCSI2 | 192.168.0.4

 

Here is how the NICs look in Windows on the Hyper-V server.

Hang on, I've got two NICs and three are displaying here – what gives? Well, when you create a virtual switch in Hyper-V and bind it to a physical adaptor (that will be the adaptor that transmits and receives the VM traffic), it makes that adaptor unusable to the host directly. As you can see from the screenshot, it unbinds just about everything except the virtual switch.

On a server, this would be fine as I would have this NIC (or teamed NIC) dedicated to VM traffic. But in my home lab, I need to use that NIC for other stuff.

Setting the VM Network

Open the Hyper-V MMC, then select the Virtual Switch Manager. Create a new virtual switch (I've just called mine 'External') and pick the adaptor you wish to bind it to.

Notice there is a check box that says 'Allow management operating system to share this network adaptor'. By checking that, the Hyper-V host will create a new logical adaptor the host can use for other stuff, and that's the third NIC displayed in the screenshot above.
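The same thing can be done in PowerShell, assuming the physical adaptor is named 'NIC1' in Windows:

New-VMSwitch -Name "External" -NetAdapterName "NIC1" -AllowManagementOS $true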

I'm going to sidetrack for a second because I want to spell this out; I can't tell you how many times I've seen people get this wrong or misunderstand how it works.

Hyper-V Host before the vSwitch is created

After an external vSwitch is created

If 'Allow management operating system to share this network adaptor' is enabled on the switch

There is an option just below this to set a VLAN tag. If you do this, you are NOT, I repeat NOT, VLAN tagging the virtual switch. You are tagging the new logical NIC that was just created in the host.

If a VLAN Tag is set when you create the vSwitch
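To make the distinction obvious, here's roughly what that tag looks like in PowerShell: the VLAN ID is applied to the management OS's logical adaptor, not to the switch itself (VLAN 10 is just an example):

Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "External" -Access -VlanId 10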

If you're after the Visio stencils I used, they can be found here.

You may have noticed at the start of the blog I stated I was using Server Core on the Hyper-V servers, but all the screenshots have been from the full UI version. In Windows Server 2012 you can switch between the full UI and Server Core very easily. Here is a good blog on the difference between the three options in Windows Server 2012: Server Core, Minimal Server Interface and the full UI.

You can switch between Server Core and Full UI using the following PowerShell Commands:

To go from Server Core to the full UI:

If you installed the Server Core SKU, you will not have the binaries to run the full UI and you will need to run the following:

Dism /online /enable-feature /featurename:ServerCore-FullServer /source:"path to source"\sources\sxs

Then to enable the full UI:

Add-WindowsFeature Server-GUI-Shell

To go from full UI to Server Core:

Remove-WindowsFeature Server-gui-mgmt-infra

Create the Cluster

This process is fairly easy: make sure the Failover Clustering feature is installed on all nodes, then launch Failover Cluster Manager and select 'Create Cluster'.

Step through the Create Cluster wizard. The first task is to select the nodes you wish to be in the cluster (you can browse by AD computer object). You'll then be asked to validate the cluster. Since this is not a production environment you don't technically need to run validation, but I would still recommend it; the cluster validation process is the best way of finding issues that will prevent the cluster from working. After validation, enter a name and IP address for the cluster, and the Create Cluster wizard takes care of the rest.
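The whole thing can also be done in a couple of lines of PowerShell from one of the nodes; the cluster name below matches my lab, but the IP address is just an example:

Test-Cluster -Node HyperV01, HyperV02
New-Cluster -Name Cluster01 -Node HyperV01, HyperV02 -StaticAddress 10.100.0.20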

Setting the Quorum configuration

From the top level of Failover Cluster Manager, select 'More Actions' in the Actions pane.

From there select 'Configure Cluster Quorum Settings'. One of the great new features of Windows Server 2012 is that it can dynamically set the quorum mode. In a Hyper-V cluster you can have several quorum modes; the most common are node majority (which just uses the hosts and does not use a witness disk) and node and disk majority (which uses the hosts plus a witness disk). As long as you have a disk available to be the witness (as was configured earlier), the cluster will dynamically set the best quorum mode based on the number of active nodes in the cluster.
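If you do want to set it explicitly rather than letting the wizard choose, it's one line of PowerShell. The witness disk name here is an assumption; use whatever yours is called:

Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 3"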

How to set the LM network

Now we need to tell the cluster which network we want to use for which function, starting with the Live Migration network. This is easy to set, but a bit tricky to find in the UI. Again, in Failover Cluster Manager, right-click on the Networks item.

Then select 'Live Migration Settings'. From here you can select the network(s) and set priorities.

How to set the CSV network

To set a preferred network here, you use the cluster network metric in PowerShell. The cluster sends CSV redirected traffic over the network with the lowest metric, so set the network you want to use for CSV to 900, which is lower than the auto-assigned values:

(Get-ClusterNetwork "Cluster Network 2").Metric = 900

To verify the setting, use the following command:

Get-ClusterNetwork | ft Name, Metric, AutoMetric, Role

Configure the CSV volumes

This was interesting to find in the UI in 2008 R2. In 2012 it's so easy it should be illegal.

Simply right click on the disk and select 'Add to Cluster Shared Volumes'
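Or from PowerShell, if you prefer:

Add-ClusterSharedVolume -Name "Cluster Disk 2"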

VMs and boot order

With all the different machines and dependencies, the order in which these things are fired up matters. I've included a table of which VM is hosted where and the boot order.

Computer | Host | Boot Number | Notes
Gaming Rig | N/A | 1 | Win 8 Pro with Hyper-V
DC01 | Gaming Rig | 2 | Resumes from saved state, takes about 3-4 seconds to come up
SAN01 | N/A | 3 | Needs the DC to authenticate
MDT01 | Gaming Rig | 4 | Hosted on the gaming rig so it could be used to PXE build the Hyper-V hosts and the SAN
RDS01 | Gaming Rig | 4 | Remote Desktop server where I use published apps to host all the consoles I need. Using RDP to remote into every server you need to do something on is so old school
HyperV01 | N/A | 5 | SAN01 must be up for the cluster to have disk
HyperV02 | N/A | 5 | SAN01 must be up for the cluster to have disk
All other VMs | Cluster01 | 6 | I then resume all the VMs I need to continue whatever I was working on

 

CSV Block cache and boot times

As you could see from the hardware specs in part 1, I don't have a lot of disk to throw at this config. So the idea is to use every little trick to extract the maximum performance. By far the biggest gain in the lab is to use differential disks and the CSV block cache. I'll go into more detail of what differential disks are, how they work and how to set them up in Part 3. For now I'll explain a little on the CSV block cache and how to enable it.

For an overview of the CSV block cache and a bit more detail, refer to this blog. However, in summary, blocks read from the storage are stored in RAM on the Hyper-V host. When the block is accessed again, it's read directly from memory on the host and not the storage, giving a huge boost in performance. The cache is split 50/50 between common reads and uncommon reads. This ensures the cache is maximising performance for both data read all the time and data read more infrequently. The difference it can make can be quite large.

To illustrate the difference, I performed a demo at TechED Australia in 2012 where I did a comparative test between booting 5 Windows 2012 Server VMs (using differential disks) with the cache off and then with the cache on. To boot the 5 VMs with the cache off took 2 minutes and 38 seconds. With the cache on, the 5 VMs booted in 47 seconds. If you want to see the demo, it's on-line here. Apologies for my voice if you watch the video, I was losing it. To save you some pain, the first part of the demo is at 43:55 and the second part of the demo is at 56:45. If you're in the mood, that session also has a very cool demo showing how a VM can remain running even if it loses its storage using the CSV redirect, and that's at 35:55.

Enabling the cache is done from PowerShell. It's not on by default in 2012 (it is enabled by default in 2012 R2), so if you want to utilise it, you'll have to turn it on. The default size of the cache is 512MB; in the lab I've found that allocating 1GB works well. To do that, use the following command:

(Get-Cluster).SharedVolumeBlockCacheSizeInMB = 1024

Please note that's the command for Windows Server 2012; it's different in 2012 R2:

(Get-Cluster).BlockCacheSize = 1024

The Cache is enabled on a CSV by CSV basis. This enables you to be selective on which CSV LUNs you cache. In my lab, I have the parent VHDX on the slow spindle disk with the cache enabled and all the differencing disks on the fast SSD with the cache disabled. This ensures the cache is maximised for the reads from the slow disk and I can save storage on the fast disk.

If you're running 2012, use the following command to enable the cache:

Get-ClusterSharedVolume "Cluster Disk 2" | Set-ClusterParameter  CsvEnableBlockCache 1

To verify the setting, use:

Get-ClusterSharedVolume "Cluster Disk 2" | Get-ClusterParameter  CsvEnableBlockCache

As you can see from the warning, the disk needs to be taken offline and then online again for the change to take effect.
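A quick way to bounce the CSV from PowerShell, assuming no VMs are running from it at the time, is something like this:

Get-ClusterSharedVolume "Cluster Disk 2" | Stop-ClusterResource
Get-ClusterSharedVolume "Cluster Disk 2" | Start-ClusterResource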

One final point - this is why I use iSCSI to the 2012 SAN rather than SMB 3.0. You can't enable CSV on SMB shares, only on LUNs connected to the hosts. No CSV means no Block Cache, and that's the only reason I don't use SMB. If SMB had a similar caching function, I would certainly use it, as it's much easier to set up.

That wraps up part 2. In Part 3, I'll cover the use of differential disks in the lab and start to cover off the automation I use so I don't spend all my lab time doing grunge work.