Sniper-V

Welcome to my public storage of specific solutions targeted at everyday technical challenges I see in the field!

Windows Server 2008 R2 SP1 Hyper-V Best Practices and Tuning


Thanks again to Roger Osborne for compiling a list of common best practices for deploying production Hyper-V servers. Of course, Hyper-V will run for you right out of the box, but depending on your environment, the information below can dramatically increase scalability and performance. Thanks, Roger!

 

Disclaimer: As with all Best Practices and Tuning guidance, not every recommendation can – or should – be applied. Best Practices are general guidelines, not hard-and-fast rules that must be followed. As such, you should carefully review each item to determine whether it makes sense in your environment. If implementing one (or more) of these Best Practices seems sensible, great; if it doesn't, simply ignore it. In other words, it's up to you to decide whether to apply these in your production environment, after thorough testing.

 

 

GENERAL (HOST):

 

* Use Server Core if possible, to reduce OS overhead, reduce potential attack surface, and to minimize reboots (due to fewer software updates)

 

* Hyper-V services should be configured to start automatically, to ensure uninterrupted VM services after reboots. (Verify in Administrative Tools → Services):

  • Hyper-V Virtual Machine Management Service   (to set to auto: sc config vmms start= auto)
  • Hyper-V Networking Management Service        (to set to auto: sc config nvspwmi start= auto)
  • Hyper-V Image Management Service             (to set to auto: sc config vhdsvc start= auto)

  (Note: sc.exe requires a space after "start=".)

 

* Ensure hosts are up-to-date with recommended Microsoft updates, to ensure critical patches and updates – addressing security concerns or fixes to the core OS – are applied.

 

* Ensure all applicable Hyper-V hotfixes and Cluster hotfixes (if applicable) have been applied

 

* Install the latest PowerShell version (currently 2.0) on each Hyper-V host

 

* Download and install the Hyper-V PowerShell Management Library

 

* Ensure hosts have the latest BIOS version, to address any known issues/supportability

 

* Host should be domain joined, unless security standards dictate otherwise.  Doing so makes it possible to centralize the management of policies for identity, security, and auditing. Additionally, hosts must be domain joined before you can create a Hyper-V High-Availability Cluster.

 

* RDP Printer Mapping should be disabled on hosts, to remove any chance of a printer driver causing instability issues on the host machine.

  • Preferred method: Use Group Policy, with host servers in their own separate OU
    • Computer Configuration → Administrative Templates → Windows Components → Remote Desktop Services → Remote Desktop Session Host → Printer Redirection → Do not allow client printer redirection → Set to "Enabled"

 

* Set the host power plan to High Performance, to ensure maximum CPU performance.

  • Preferred method: Use Group Policy, with host servers in their own separate OU
    • Computer Configuration → Preferences → Control Panel Settings → Power Options
      • Once there, create a new Power Plan (using the "Vista or higher" selection) and assign it "High Performance"

 

* Do not install any Roles on a host other than the Hyper-V role

  • When the Hyper-V role is installed, the host OS becomes the "Parent Partition" (a quasi-virtual machine), and the hypervisor is placed between the parent partition and the hardware. As a result, it is not recommended to install additional roles, services, etc.

 

* The only Features that should be installed on the host are Failover Clustering and Multipath I/O – and both are only needed if the host will be part of a cluster. (See the explanation above for reasons why installing additional features is not recommended.)

 

* Anti-virus software can be installed, if desired; however, be sure to exclude Hyper-V specific files using KB 961804:

  • http://support.microsoft.com/kb/961804
    • Default virtual machine configuration directory (C:\ProgramData\Microsoft\Windows\Hyper-V)
    • Custom virtual machine configuration directories, if applicable
    • Default virtual hard disk drive directory
    • Custom virtual hard disk drive directories
    • Snapshot directories
    • Vmms.exe (Note: may need to be configured as a process exclusion within the antivirus software)
    • Vmwp.exe (Note: may need to be configured as a process exclusion within the antivirus software)
    • Additionally, when you use Cluster Shared Volumes, exclude the CSV path "C:\ClusterStorage" and all its subdirectories.

 

* The default VM path and VHD path should be set to a non-system drive, because keeping them on the system drive can cause disk latency, as well as create the potential for running out of disk space.

 

* Enable the iSCSI Service (TCP-In) and iSCSI Service (TCP-Out) rules in the host's Firewall settings (port 3260), to allow iSCSI traffic to pass between the host and the SAN device. Not enabling these rules will prevent iSCSI communication.

 

* Periodically run performance counters against the host, to ensure optimal performance.

Recommend using the Hyper-V R2 SP1 performance counter that can be extracted from the (free) Codeplex PAL application:

  • http://pal.codeplex.com/
  • Install PAL on a workstation and open it, then click on the Threshold File tab.
    • Select "Microsoft Hyper-V R2 SP1" from the Threshold file title, then choose Export to Perfmon template file. Save the XML file to a location accessible to the Hyper-V host.
  • Next, on the host, open Server Manager → Diagnostics → Performance → Data Collector Sets → User Defined. Right-click on User Defined and choose New → Data Collector Set. Name the collector set "Hyper-V Performance Counter Set" and select Create from a template (Recommended), then choose Next. On the next screen, select Browse and then locate the XML file you exported from the PAL application. Once done, this will show up in your User Defined Data Collector Sets.
  • Run these counters in Performance Monitor for 30 minutes to 1 hour (during high usage times) and look for disk latency, memory and CPU issues, etc.

 

* If the server has more than 32 physical cores, do not enable Hyper-Threading, as doing so creates more logical processors than Hyper-V supports on Server 2008 R2. (The maximum is 64.)

 

PHYSICAL NICs:

 

* Ensure NICs have the latest firmware, which often address known issues with hardware.

 

* Ensure latest NIC drivers have been installed on the host, which resolve known issues and/or increase performance.

 

* Consider disabling Chimney Offload, as it has been found to cause slowness of virtual machines.

 

* Jumbo frames should be turned on and set for 9000 or 9014 (depending on your hardware) for CSV, iSCSI and Live Migration networks. This can significantly increase throughput while also reducing CPU cycles.

  • End-to-End configuration must take place – NIC, SAN, Switch must all support Jumbo Frames.
  • You can enable Jumbo frames when using crossover cables (for Live Migration and/or Heartbeat), in a two node cluster.
  • To verify Jumbo frames have been successfully configured, run the following command from your Hyper-V host(s):
    • ping sanIP -f -l 8000
      • This command pings the SAN at sanIP (e.g. 192.168.1.130) with an 8000-byte packet and fragmentation disabled (-f). If replies are received, Jumbo frames are properly configured.

 

* NICs used for iSCSI communication should have all Networking protocols (on the Local Area Connection Properties) unchecked, with the exception of:

  • Manufacturer's protocol (if applicable)
  • Internet Protocol Version 4
  • Internet Protocol Version 6

Unbinding other protocols (not listed above) helps eliminate non-iSCSI traffic/chatter on these NICs.

 

* When creating virtual switches, uncheck the "Allow management operating system to share this network adapter" option, in order to create a dedicated network for your VM(s) to communicate with other computers on the physical network.

 

* Recommended network configuration when clustering:

 

Min # of Networks on Host | Host Management | VM Network Access | CSV   | Live Migration  | iSCSI
------------------------- | --------------- | ----------------- | ----- | --------------- | -----
5                         | “Management”    | “Production”      | “CSV” | “LiveMigration” | “iSCSI”

 

** CSV & Live Migration Networks can be crossover cables, if you are building a 2 node cluster **

 

  

VIRTUAL NETWORK ADAPTERS (NICs):

 

* Legacy Network Adapters (a.k.a. Emulated NIC drivers) should only be used for PXE-booting a VM or when installing non-Hyper-V-aware Guest operating systems. Hyper-V's synthetic NICs (the default NIC selection; a.k.a. Synthetic NIC drivers) are far more efficient, because they use a dedicated VMBus to communicate between the virtual NIC and the physical NIC; this reduces CPU cycles and requires far fewer hypervisor/guest transitions per operation.

 

 

DISK:

 

* Disks should be Fixed or Pass-Through in a production environment, to increase disk throughput. Differencing and Dynamic disks are not recommended for production, due to possible data loss (differencing disks) and increased disk read/write latency times (differencing/dynamic disks).

 

* Disable snapshots from all production VMs. Snapshots can cause disk space issues, as well as additional physical I/O overhead.

  • Set the snapshot path for each VM to a non-existent location, so users get an error if they attempt to create a snapshot
  • If snapshots are required, the snapshot location should not be the host OS drive.

 

* The physical format of hard disk drives used for hosting VMs should be 512-byte sectors, to prevent compatibility issues (see http://support.microsoft.com/kb/2515143).

It is not recommended to use 512e formatting for disks that will house VHDs, because internal testing has shown a performance degradation of around 30% for most workloads.

Regarding 4K Disks:

“The VHD driver in Server 2008 R2 assumes that the physical sector size of the disk to be 512 bytes and issues 512 byte IOs, which makes it incompatible with these disks. The VHD stack fails to open the VHD files on physical 4kB sector disks for this reason.”

Taken from: http://support.microsoft.com/kb/2515143

 

Side-Note: Windows 8 fully supports 4K disks out of the box.

 

* Page file on Hyper-V Host should be set to a fixed size (4GB max) on the system drive, since most Hyper-V implementations have large amounts of physical memory, and, by default, the page file is the same size as the physical amount of memory.

  • Can be placed on a SAN drive, if desired
  • Should not be on a VM volume, to reduce possible disk latency if page file is being used by host

 

 

MEMORY:

 

* Set reserved Hyper-V Parent Host memory, to ensure memory is set aside for the host, itself.

  • To determine minimum host memory reserve, follow these guidelines:
    • Use the following calculation (result in MB):
      • 384 + (30 × GB of physical memory)
      • For example: 384 + (30 × 48) = 1824 MB minimum reserve recommendation on a host with 48 GB of memory
  • To set memory reserve, change the following:
    • Registry Key:
      • HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\MemoryReserve
        • Value is set in Decimal, and is in megabytes (e.g. 4096)
        • Requires a reboot to take effect
  • 2-4 GB Minimum on average
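The reserve formula above is just arithmetic, so it can be sanity-checked anywhere; a minimal POSIX-shell sketch (the 48 GB host is simply the example from the text, not a recommendation):

```shell
#!/bin/sh
# Rule-of-thumb parent-partition memory reserve, per the formula above:
#   reserve_mb = 384 + 30 * (physical memory in GB)   (all values in MB)
ram_gb=48                             # example host with 48 GB of RAM
reserve_mb=$((384 + 30 * ram_gb))
echo "Minimum recommended reserve: ${reserve_mb} MB"   # prints 1824 MB
```

The result is what you would enter (in decimal) into the MemoryReserve registry value described above.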

 

* Use Dynamic Memory on all VMs, unless it is not supported by the workload (e.g. Lync).

  • Dynamic Memory adjusts the amount of memory available to a virtual machine, based on changes in memory demand using a memory balloon driver, which helps use memory resources more efficiently.

 

* Guest OS should be configured with (minimum) recommended memory

  • 2048 MB for Windows Server 2008, including R2 (e.g. 2048 - 4096 Dynamic Memory)
  • 1024 MB for Windows 7 (e.g. 1024 - 2048 Dynamic Memory)
  • 1024 MB for Windows Vista (e.g. 1024 - 2048 Dynamic Memory)
  • 256 MB for Windows Server 2003 (e.g. 256 - 2048 Dynamic Memory)
  • 128 MB for Windows XP (e.g. 128 - 2048 Dynamic Memory)

 

* Ensure Integration Components (ICs) have been installed on all VMs (pre-2008 / pre-Win 7 / other OS). ICs significantly improve interaction between the VM and the physical host.

  • Enlightened OSes (Server 2008 or higher, Windows 7 or higher) ship with ICs and don't need them installed separately.

 

 

CLUSTER:

 

* Set preferred network for CSV communication, to ensure the correct network is used for this traffic

  • The lowest metric in the output generated by the following PowerShell command will be used for CSV traffic
    • Get-ClusterNetwork | ft Name, Metric, AutoMetric, Role
      • This will return a listing of networks used by the host, as well as the metric assigned
  • In order to change which network interface is used for CSV traffic, use the following PowerShell command:
    • (Get-ClusterNetwork "CSVNetwork").Metric=900
      • This will set the network named "CSVNetwork" to 900

 

* Set preferred network for Live Migration, to ensure the correct network(s) are used for this traffic:

  • Open Failover Cluster Manager, Expand the Cluster then Expand Services and applications
  • Under Services and applications, click once on any of the VMs listed in the left pane
  • Next, in the middle pane (under the title “Virtual Machine”), right click your VM and choose properties
  • Click on the Network for live migration tab
    • Use the Up / Down buttons to list the networks in order from most preferred (at the top) to least preferred (at the bottom)
    • Uncheck any networks you do not want used for Live Migration traffic
    • Select Apply and then press OK
  • Once you have made this change, it will be used for all VMs in the cluster

 

* The Cluster Shutdown Time (ShutdownTimeoutInMinutes registry entry) should be set to an acceptable number

  • Default is set using the following calculation (which can be too high, depending on how much physical memory is installed)
    • (100 / 64) * physical RAM
    • For example, a 96GB system would have 150 minute timeout!    (100/64)*96 = 150
  • Consider setting the timeout to 10 or 15 minutes, depending on the number of VMs
    • Registry Key: HKLM\Cluster\ShutdownTimeoutInMinutes   
      • Enter minutes in Decimal value.
      • Requires a reboot to take effect
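The default-timeout arithmetic above can be sketched the same way (the 96 GB host is the example from the text; multiply before dividing so integer math reproduces the documented result):

```shell
#!/bin/sh
# Default cluster shutdown timeout: (100 / 64) * physical RAM in GB, in minutes.
ram_gb=96                               # example host with 96 GB of RAM
timeout_min=$((100 * ram_gb / 64))      # multiply first to avoid truncating 100/64 to 1
echo "Default ShutdownTimeoutInMinutes: ${timeout_min}"   # prints 150
```

Running this for your own RAM size shows whether the default is reasonable or worth overriding via the registry key above.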

 

* Each node in the cluster requires an identically named (case sensitive!) virtual switch. Failovers and Live Migrations will fail without identically named switches

  • If any changes are made to the virtual switch – on any node – you must refresh the virtual machine configuration (Failover Cluster Manager → <Cluster Name> → Services and applications → <VM name> → More Actions (in right pane) → Refresh virtual machine configuration). Repeat this process for each VM listed in Services and applications.

 

* Run Cluster Validation periodically to remediate any issues

  • NOTE: If all LUNs are part of the cluster, the validation test will skip all disk checks. It is recommended to set up a small test-only LUN and share it on all nodes, so full validation testing can be completed.

 

 

VIRTUAL DOMAIN CONTROLLERS (DCs):

 

* It is recommended to partially disable the time synchronization between the VM DC and the host (using registry change). This enables the guest DC to synchronize time for the domain hierarchy, but protects it from having a time skew if it is restored from a saved state:

  • On the virtual DC, enter the following from an elevated command-prompt:

 

reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider /v Enabled /t REG_DWORD /d 0

 

  • Once done, you can leave the "Time Synchronization" enabled on Integration Services, under the DC's Hyper-V Settings

 

* DC VMs should have "Shut down the guest operating system" in the Automatic Stop Action setting applied (in the settings on the Hyper-V Host)

 

* If VHDs are IDE/ATA drives, ensure disk write caching is disabled, to reduce the chance of AD corruption.

 

Comments
  • don't think the following is accurate

    "Use Server Core if possible, to reduce OS overhead, reduce potential attack surface, and to minimize reboots (due to fewer software updates)"

    it needs to be rebooted almost everytime a non-core version needs to be rebooted, its slightly more secure and is more performant but enough to deal with the headaches server-core causes.

  • other then that awesome post

  • This is excellent! Thank you for sharing!

  • Have a question. When the host is running with latest service pack can the guest be without the latest servicepack

  • nice article, i especially like the disclaimer about "best practice" isnt ment to make people not think about their needs and environment.

    small typo though i think "For example: 384 + (30*48) = 1824GB min reserve recommendation on host with 48GB memory" should be MB

  • @pnk: yes, just as you can have windows 2003 or other os guests on your hyper-v host, you can have the host on sp1 and the guest on sp0. (though the recommendation would be to have all systems on the latest patchlevel)

  • In clusters, make sure you have a Domain Controller that "lives" outside the cluster! DO NOT EVER run a failover cluster environment where your only DC(s) are virtual machines INSIDE that cluster!

    If your cluster fails or is shut down, it must talk to a DC prior to coming back online. If the DC is unreachable, then the cluster host cannot start the cluster services. Therefore it cannot start the clustered DC virtual machine. Therefore it cannot talk to the DC and start the cluster.

    We had a power failure that restarted our cluster. The only DC was a VM inside the cluster. We eventually had to restore the DC from backup to a hard drive outside the cluster, start that backup, THEN bring the cluster back online.

    We now have 3 DCs. One physical, one a VM outside the cluster/SAN, and one inside the cluster on a SAN.

  • In memory recommendation is it correct that reserve shall be in GBs.
