In parts 1 through 4, I covered the external dependencies and the “why” of SR-IOV. So it’s about time I showed you how to set up SR-IOV and what it looks like in a little more detail from a configuration perspective, both through the user interface in Hyper-V Manager and from PowerShell.
In part one, I showed a block diagram of how the indirect I/O model works for virtual machine networking. Here’s a similar block diagram showing, at an extremely high level, how this changes with SR-IOV. For simplicity, I am showing a single VM with a single VF.
A few key points I want to bring out from this diagram:
The first step when allowing a virtual machine to have connectivity to a physical network is to create an external virtual switch using Virtual Switch Manager in Hyper-V Manager. The additional step that is necessary when using SR-IOV is to ensure the SR-IOV checkbox is checked when the virtual switch is being created. It is not possible to change a “non SR-IOV mode” external virtual switch into an “SR-IOV mode” switch. The choice must be made at switch creation time.
This can also be done through PowerShell using New-VMSwitch. New-VMSwitch requires a parameter to specify the physical network adapter which is going to be used. The physical network adapters can be identified using Get-NetAdapter. In the following screenshot, I have a machine which has multiple physical NICs, one of which is an onboard NIC not capable of SR-IOV, and two dual-port PCI Express 10G NICs which are capable of supporting SR-IOV. Note that I have given some of the adapters “friendly names” using the network control panel (ncpa.cpl).
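As a quick sketch of that enumeration step (the output columns shown are standard Get-NetAdapter properties; the adapter names on your system will of course differ):

```powershell
# List the physical network adapters on the host. Name (the friendly
# name), InterfaceDescription and Status help identify which NIC to
# bind the virtual switch to.
Get-NetAdapter | Format-Table Name, InterfaceDescription, Status
```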
The following two screenshots show the different ways to use New-VMSwitch to create a virtual switch bound to the SR-IOV capable network adapters from the previous screenshot. Note the use of the -EnableIov parameter.
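In text form, switch creation looks something like the following. The switch name and the adapter friendly name here are examples; substitute whatever Get-NetAdapter reports on your system:

```powershell
# Create an external virtual switch in SR-IOV mode, bound to a physical
# adapter by its friendly name. SR-IOV mode must be chosen here; it
# cannot be enabled on the switch later.
New-VMSwitch -Name "SR-IOV Switch" -NetAdapterName "10G Port 1" -EnableIov $true
```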
Let’s look at the properties we expose on the VMSwitch object in more detail.
IovEnabled: True if the switch is created in SR-IOV mode, False otherwise
IovVirtualFunctionCount: The number of VFs that are available for use by virtual machines. This will vary by vendor. Note that each software-based NIC can be backed by a VF, and each VM can have up to 8 software-based NICs.
IovVirtualFunctionsInUse: The number of VFs currently being used by running VMs. In the screenshot, the number is 1 as I have a single running VM with a single software-based NIC in SR-IOV mode.
IovQueuePairCount: The number of queue pairs available as hardware resources on the physical NIC. This will vary by vendor. There will be at least as many queue pairs available as there are VFs; some vendors may have more queue pairs available than there are VFs. I recommend you generally think of a VF as the entity being assigned to a virtual machine’s network adapter rather than one or more queue pairs. However, a VF requires at least one queue pair to operate. If the NIC vendor supports additional features such as RSS in a VM backed by a VF, more than one queue pair may be required for a VF. For more information, you should consult NIC vendor guidance.
IovQueuePairsInUse: The number of hardware queue pairs currently allocated to VFs assigned to running VMs.
IovSupport/IovSupportReasons: Array of numeric codes and descriptions regarding the status of the network adapter. More information on these properties will be covered in the “debugging why SR-IOV doesn’t work” part of this series.
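All of the properties above can be inspected from PowerShell. A minimal example (the switch name is the hypothetical one used earlier; a wildcard works because all of these properties share the Iov prefix):

```powershell
# Display the SR-IOV related properties of a virtual switch, such as
# IovEnabled, IovVirtualFunctionCount and IovQueuePairsInUse.
Get-VMSwitch "SR-IOV Switch" | Format-List Iov*
```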
Once a virtual switch has been created, the next step is to configure a virtual machine. SR-IOV in Windows Server “8” is supported on x64 editions of Windows “8” as a guest operating system (as in Windows “8” Server, and Windows “8” client x64, but not x86 client). We have rearranged the settings for a virtual machine to introduce sub-nodes under a network adapter, one of which is the hardware acceleration node. At the bottom is a checkbox to enable SR-IOV.
Under the covers, this checkbox is setting a property, IovWeight. This is identical in functionality to VMQWeight in Windows Server 2008 R2, and expresses a desire for a hardware offload, not a guarantee. A number between 1 and 100 is “on”, and 0 is “off”. We do not, in Windows Server “8”, use a relative weighting system; all numbers between 1 and 100 mean the same. This design allows us to add ‘weighting’ functionality in the future without needing to change APIs.
As with switch creation, enabling SR-IOV on a virtual machine’s virtual network adapter can be done through PowerShell using Set-VMNetworkAdapter by setting the IovWeight property, as per the following screenshot.
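For example (the VM name here is a placeholder; any value from 1 to 100 means the same thing, as discussed above):

```powershell
# Request SR-IOV for the VM's network adapter. This expresses a desire
# for the offload, not a guarantee.
Set-VMNetworkAdapter -VMName "TestVM" -IovWeight 100

# Setting the weight back to 0 turns SR-IOV off for this adapter.
Set-VMNetworkAdapter -VMName "TestVM" -IovWeight 0
```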
Assuming you have met all the requirements for SR-IOV, you will see the status change on the networking tab in Hyper-V Manager for a selected VM to “OK (SR-IOV active)”.
Let’s go back to that previous PowerShell output and examine the SR-IOV related properties of the VMNetworkAdapter object.
IovWeight: Discussed above
IovQueuePairsRequested/IovQueuePairsAssigned: These support advanced networking features for a VF. One example is RSS in a virtual machine (when backed by a VF), which requires that the physical network adapter itself supports RSS on a VF. Note that this is the first time we have been able to achieve RSS in a VM. While this series of posts isn’t about RSS, its benefits, or how to configure it, it’s worth a little diversion. More information about RSS, first introduced in Windows Server 2008, can be found here.
By default, the IovQueuePairsRequested will be set to 1, and it can never be less than 1. If the VF hardware supports RSS and you have a multi-processor VM, you can use this parameter to request additional queue pairs from the set of hardware resources available to allow the VM to scale. It is, however, a request, and the actual number of queue pairs assigned may be less, depending on hardware resources. The number assigned will be in IovQueuePairsAssigned.
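A sketch of requesting additional queue pairs and checking what was actually granted (again, the VM name is a placeholder, and whether more than one queue pair is honoured depends on the NIC hardware):

```powershell
# Request four queue pairs for the VF, e.g. for RSS scaling in a
# multi-processor VM. The hardware may assign fewer than requested.
Set-VMNetworkAdapter -VMName "TestVM" -IovQueuePairsRequested 4

# Compare the request against what the NIC actually assigned.
Get-VMNetworkAdapter -VMName "TestVM" |
    Format-List IovQueuePairsRequested, IovQueuePairsAssigned
```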
IovInterruptModeration: Many modern physical NICs have an advanced property that allows the driver to moderate interrupts. As there are now multiple functions (PF and VFs) which process interrupts, this property allows the VF driver to adapt depending on load. The underlying implementation is up to the driver writer. Hence you should refer to the NIC vendor for guidance as to whether this is implemented, and what the recommended setting should be for the workloads you are running. The possible values are Default, Adaptive, Off, Low, Medium, and High. In most cases keeping the default of “Default” will be sufficient.
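Should your NIC vendor recommend a different setting, it can be changed in the same way as the other adapter properties (VM name is a placeholder):

```powershell
# Change VF interrupt moderation. Consult your NIC vendor's guidance
# before moving away from "Default"; behaviour is driver-specific.
Set-VMNetworkAdapter -VMName "TestVM" -IovInterruptModeration Adaptive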
IovUsage: Will have value 1 if a VF is actively being used by a VM, 0 otherwise.
Status/StatusDescription: Array of numeric codes and descriptions regarding the status of the network adapter. These are not exclusive to SR-IOV, although we do populate them when IovWeight is set but not working correctly. More information on these properties will be covered in the “debugging why SR-IOV doesn’t work” part of this series.
VirtualFunction: This provides a lot more information about the VF itself, but to all intents and purposes, you can ignore this property. However, it could be useful to scripters who want to tie back to the physical interface being used on the system through the ifAlias and ifDesc properties. For those who really want to know the full gory details of this object, here’s the full output:
VirtualFunctionsAssigned: This will be deprecated before final release and can be ignored in Windows Server “8” Beta. IovUsage is the parameter to use.
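For scripters, the tie-back mentioned under VirtualFunction above might look like this sketch (VM name is a placeholder, and the property shapes are as surfaced in the Windows Server “8” Beta):

```powershell
# Retrieve the VF object from the VM's network adapter and surface the
# ifAlias/ifDesc properties that identify the physical interface.
$vf = (Get-VMNetworkAdapter -VMName "TestVM").VirtualFunction
$vf | Format-List ifAlias, ifDesc
```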
So that pretty much covers the user interface and PowerShell side of SR-IOV configuration. In the next part, I’ll cover Live Migration and show SR-IOV in action with a short video.
Can you please explain how virtual machines communicate with each other when they are located on the same host? Do they communicate through the virtual switch, the one with SR-IOV enabled? Do they communicate through the physical switch? Or do they need an internal switch for this kind of communication?
Novak - I allude to this at the end of part 2. Essentially the physical NIC has a hardware embedded switch which does the VF-VF routing between two SR-IOV enabled VMs running on the same host.
Thank you for the answer. One more question. Is there time-sharing of VFs between VMs? For example, if we have more VMs than VFs, will they share VFs, or is the usage of VFs exclusive, meaning that the number of VMs is limited by the number of VFs?
In this context, VFs are fixed resources in the hardware, no time sharing.
Thank you very much Sir.
If NVGRE is used on a virtual network interface, will the virtual network interface traffic take the SR-IOV path or instead go through the Hyper-V Switch path?