Configure NIC teaming and QoS with VMM 2012 SP1 by Kristian Nese

NIC Teaming is a new feature included in the Windows Server 2012 operating system that allows you to team any two NICs. It doesn’t matter if the NICs are different speeds or from different vendors. Windows Server 2012 will team any two NICs and support for NIC Teaming is completely Microsoft’s responsibility – you don’t have to call the vendor or vendors. This is a feature we’ve been wishing to get for a long time, and now we have it!

Another nice cloud infrastructure related feature is QoS. You can use QoS to shape the bandwidth on your NICs so that you can host multiple traffic types on a NIC and make sure that each traffic type gets the bandwidth it needs. For example, you might have an infrastructure network in your cloud and the infrastructure network hosts cluster/CSV, Live Migration, management and storage traffic. The Windows Server 2012 QoS uses industry standard protocols and will work with your current infrastructure.
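
As a quick taste (this is not part of Kristian’s walkthrough below, and the policy names and weight values are just illustrative assumptions), a couple of the built-in traffic filters can be combined with minimum-bandwidth weights in PowerShell:

# Example only: give live migration and SMB traffic relative minimum-bandwidth shares
New-NetQosPolicy -Name "Live Migration" -LiveMigration -MinBandwidthWeightAction 30
New-NetQosPolicy -Name "SMB" -SMB -MinBandwidthWeightAction 40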

The trick is to get NIC Teaming and QoS to work together, especially in a System Center Virtual Machine Manager driven environment. That’s where Kristian Nese, a true VMM Ninja, comes in – in the following article he shows you how to make these technologies work together in SCVMM. Enjoy! –Tom.

This article is reposted from Kristian’s blog. If you would like to see more articles like this, please see Kristian’s site Virtualization and Some Coffee.


NIC teaming and QoS are hot topics these days.

Windows Server 2012 supports NIC teaming out of the box, and this finally gives us some flexible design options when it comes to Hyper-V and Hyper-V Clustering.

In a nutshell, NIC teaming gives us:

  • Load Balancing
  • Failover
  • Redundancy
  • Optimization
  • Simplicity
  • Bandwidth Aggregation
  • Scalability

Requirements for NIC teaming in Windows Server 2012

NIC Teaming requires at least one Ethernet network adapter, which can be used to separate traffic using VLANs. If you require failover, at least two Ethernet network adapters must be present. Windows Server 2012 NIC teaming supports up to 32 NICs in a team.

Configuration

The basic mode used for NIC teaming is switch-independent mode, where the switch doesn’t know or care that the NIC adapter is participating in a team. The NICs in the team can be connected to different switches.

Switch-dependent mode requires that all NIC adapters in the team are connected to the same switch. A common choice for switch-dependent mode is generic or static teaming (IEEE 802.3ad draft v1), which requires configuration on the switch and the computer to identify which links form the team. Because this is a static configuration, there is no additional assistance to detect incorrectly plugged cables or other odd behavior.

Dynamic teaming (IEEE 802.1ax, LACP) uses the Link Aggregation Control Protocol to dynamically identify links between the switch and the computer, which gives you the ability to automatically create the team, as well as to reduce and expand it.
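
To make the modes concrete, here is a minimal PowerShell sketch using the built-in NetLbfo cmdlets in Windows Server 2012 (the team and adapter names are placeholders, and the three commands are alternatives – pick one teaming mode):

# Switch-independent team - no switch configuration required
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent

# Switch-dependent alternatives: static (802.3ad draft v1) or LACP
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode Static
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode LACP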

Traffic distribution algorithms

Windows Server 2012 supports two different traffic distribution methods: hashing and Hyper-V switch port.

Hyper-V switch port

When virtual machines have independent MAC addresses, the MAC address provides the basis for dividing traffic. Since the switch can determine that a specific source MAC address is on only one connected network adapter, the switch is able to balance the load (traffic from the switch to the computer) across multiple links, based on the destination MAC address for the VM.

Hashing

The hashing algorithm creates a hash based on components of the packet and assigns packets with that hash value to one of the available network adapters. This ensures that packets from the same TCP stream are kept on the same network adapter.

Components that can be used as inputs to the hashing functions include the following:

  • Source and destination IP addresses, with or without considering the MAC addresses (2-tuple hash)
  • Source and destination TCP ports, usually together with the IP addresses (4-tuple hash)
  • Source and destination MAC addresses.
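
On the host, these hash choices correspond to the team’s load-balancing algorithm. A short sketch with the NetLbfo cmdlets (the team name is a placeholder):

Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm TransportPorts   # 4-tuple hash
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm IPAddresses      # 2-tuple hash
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm MacAddresses     # MAC-address hash
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm HyperVPort       # Hyper-V switch port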

To use NIC teaming in a Hyper-V environment, there are some nice new PowerShell features available to separate the traffic with QoS.

More information about this can be found at http://technet.microsoft.com/en-us/library/jj735302.aspx
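
For reference, a converged setup done directly on a Hyper-V host would look roughly like this (the switch, team and vNIC names, VLAN IDs and weights are all placeholder assumptions; in the rest of this article the same result is driven from VMM instead):

# Virtual switch on top of the team, using weight-based minimum bandwidth (QoS)
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "Team1" -MinimumBandwidthMode Weight -AllowManagementOS $false

# Host vNICs for the different traffic types
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"

# VLAN tagging and bandwidth weights per vNIC
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 20
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30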

The scenario I’ll demonstrate in VMM is using NIC teaming with two 2GbE modules on the server.

Overview

  • We will create a single team on the host
  • We will create several virtual NICs for different traffic, like SMB, Live Migration, Management, Cluster and Guests

System Center Virtual Machine Manager is the management layer for your virtualization hosts, and Service Pack 1 adds support for managing Hyper-V hosts running Windows Server 2012. This also includes the concept of converged fabric and network virtualization.

The catch is that you must create the team with Virtual Machine Manager. If the team is created outside of VMM, VMM will not be able to import the configuration properly or reflect the changes you make.

Pre-requisites

  • Create LACP trunk on physical switches
  • Set default VLAN if not 1
  • Allow required VLANs on trunk

Configure Logical Networks in Fabric

This is the first mandatory step.

1. Create logical networks for all the actual networks you will be using. This means management, cluster, live migration, iSCSI, SMB and so on. Configure sites, VLAN/subnet and optionally IP pools for those networks, so that VMM can assign IP addresses to the vNICs you will create.

For all your routable networks, configure default gateway, DNS suffix and DNS in the pool.

2. Associate the logical networks with your physical network adapters on your hosts
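
If you prefer to script this step, a rough VMM PowerShell sketch for a single logical network could look like this (the names, VLAN, subnet and address ranges are placeholder assumptions; verify the exact parameters against the VMM 2012 SP1 cmdlet help):

$ln   = New-SCLogicalNetwork -Name "Management"
$vlan = New-SCSubnetVLan -Subnet "10.0.10.0/24" -VLanID 10
$site = New-SCLogicalNetworkDefinition -Name "Management_Site" -LogicalNetwork $ln -SubnetVLan $vlan -VMHostGroup (Get-SCVMHostGroup -Name "All Hosts")

# IP pool so that VMM can hand out addresses to the vNICs later
New-SCStaticIPAddressPool -Name "Management_Pool" -LogicalNetworkDefinition $site -Subnet "10.0.10.0/24" -IPAddressRangeStart "10.0.10.50" -IPAddressRangeEnd "10.0.10.99" -DefaultGateway (New-SCDefaultGateway -IPAddress "10.0.10.1" -Automatic) -DNSServer "10.0.10.2"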

Configure VM Networks using the Logical Networks created in Fabric

1. Navigate to VMs and Services within the VMM console

2. Right-click on VM Networks and select ‘Create VM Network’

3. Assign the VM Network a name that reflects the actual logical network you are using (available from the drop-down list) and click next

4. Select No Isolation, since we will be using the actual network as the basis in this configuration

5. Click finish, and repeat the process for every network you will use in your configuration
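
Scripted, this step might look roughly like the following in VMM PowerShell (the name is a placeholder, and I am assuming the ‘No Isolation’ option maps to the "NoIsolation" isolation type):

$ln = Get-SCLogicalNetwork -Name "Management"
New-SCVMNetwork -Name "Management" -LogicalNetwork $ln -IsolationType "NoIsolation"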

Configure Native Port Profiles

We will create Native Port Profiles both for the physical NICs used in the team and for the vNICs, and group the profiles in a logical switch that we will apply to the hosts.

Creating Uplink Port Profile

1. Navigate to Native Port Profiles in Fabric, right click and create new Native Port Profile

2. Select ‘Uplink port profile’ first and choose the teaming mode and load balancing algorithm. I will use switch independent and HyperVPort

3. Select the appropriate network sites, and enable network virtualization if that’s a requirement. This will instruct the team that network virtualization should be supported, and enable the network virtualization filter driver on the adapter.

4. Click finish
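
A hedged scripting equivalent for the uplink profile (the profile and site names are placeholders, and the LBFO parameter names reflect my reading of the SP1 cmdlets, so double-check them in your environment):

$site = Get-SCLogicalNetworkDefinition -Name "Management_Site"
New-SCNativeUplinkPortProfile -Name "Uplink_Converged" -LogicalNetworkDefinition $site -LBFOTeamMode "SwitchIndependent" -LBFOLoadBalancingAlgorithm "HyperVPort" -EnableNetworkVirtualization $false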

Creating virtual network adapter port profile

1. Repeat the process, and create a new Native Port Profile

2. Select ‘Virtual network adapter port profile’ and assign a name. We will repeat this process for every vNIC we will need in our configuration, reflecting the operation we did with VM networks earlier

3. Go through the wizard and select offload settings and security settings for the vNIC you will use for virtual machines, and specify bandwidth (QoS) for the different workloads

Repeat this process for every vNIC
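
Roughly, the same thing in VMM PowerShell (the profile names and weight values are placeholder assumptions; the minimum bandwidth weight is the QoS knob that divides the team’s bandwidth between the vNICs):

New-SCVirtualNetworkAdapterNativePortProfile -Name "Management_Profile" -MinimumBandwidthWeight 10
New-SCVirtualNetworkAdapterNativePortProfile -Name "LiveMigration_Profile" -MinimumBandwidthWeight 30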

Creating Port Classifications

1. We need to classify the different vNICs, so navigate to Port Classification in Fabric, right click, and select ‘Create new Port Classification’.

2. Assign a name and, optionally, a description.

Repeat this process for every vNIC
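
Scripted, each classification is a one-liner (the names and descriptions are placeholders):

New-SCPortClassification -Name "Management" -Description "Host management traffic"
New-SCPortClassification -Name "Live Migration" -Description "Live migration traffic"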

Creating Logical Switch

The logical switch groups our configuration and simplifies the creation of NIC teams and vNICs on the hosts.

1. In Fabric, right click on Logical Switch, and select ‘Create new Logical Switch’.

2. Assign a name and click next

3. Choose the extensions you want, and click next

4. Specify which uplink profiles should be available for this logical switch. Here you can decide whether the logical switch should support teaming. If so, enable ‘Team’ and add the uplink profile you created earlier, and click next.

5. Specify the port classifications for the virtual ports that are part of this logical switch. In this step of the wizard, add the virtual network adapter port profiles with the corresponding port classifications you created earlier. Click next and finish.

You have now created a logical switch that you will apply to your Hyper-V hosts
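
For completeness, a rough sketch of the same grouping in VMM PowerShell; treat all the names and the parameter set as assumptions and verify them with Get-Help before use:

$ls = New-SCLogicalSwitch -Name "ConvergedSwitch" -SwitchUplinkMode "Team" -EnableSriov $false

# Attach the uplink profile and the vNIC profile/classification pairs to the switch
New-SCUplinkPortProfileSet -Name "Uplink_Converged" -LogicalSwitch $ls -NativeUplinkPortProfile (Get-SCNativeUplinkPortProfile -Name "Uplink_Converged")
New-SCVirtualNetworkAdapterPortProfileSet -Name "LiveMigration" -LogicalSwitch $ls -PortClassification (Get-SCPortClassification -Name "Live Migration") -VirtualNetworkAdapterNativePortProfile (Get-SCVirtualNetworkAdapterNativePortProfile -Name "LiveMigration_Profile")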

Creating a Logical Switch and virtual network adapters on your Hyper-V hosts

Navigate to your hosts in Fabric, right click and click properties.

1. Navigate to ‘Virtual Switches’ on the left, and click ‘New Virtual Switch’ and select ‘New Logical Switch’.

2. Select the Logical Switch you created earlier in the drop down list, and add the physical adapters that should be joining this team.

3. Click ‘New Virtual Network Adapter’ and give the vNIC a name, select the VM Network (created earlier) for connectivity, enable VLAN if needed, assign the IP configuration (choose static if you want VMM to handle this from the IP pool), and select a port profile.

Repeat this process and create a new virtual network adapter for all your vNICs and map them to their corresponding networks and profiles.

Important:
If you want to transfer the IP address that's currently used as management IP to a virtual network adapter, remember to mark the option 'This virtual network adapter inherits settings from the physical management adapter'.

Once you have configured this, click ‘OK’ and VMM will create the team and its vNICs on the host.

For the management vNIC, you can assign the IP address you are currently using on the physical NIC, and VMM will place this on the vNIC during creation.

You have now created converged fabric using VMM, and enabled your hosts to leverage network virtualization.
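
If you want to double-check on the host what VMM just built, these read-only cmdlets show the team, the switch mode, and the vNICs with their VLANs:

Get-NetLbfoTeam                                              # the team VMM created
Get-VMSwitch | Select-Object Name, BandwidthReservationMode  # should report 'Weight'
Get-VMNetworkAdapter -ManagementOS | Select-Object Name, SwitchName
Get-VMNetworkAdapterVlan -ManagementOS                       # VLAN per host vNIC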

I would like to thank Hans Vredevoort for the healthy discussions we're having on the topic, and for providing me with insight and tips for the configuration. Hans Vredevoort is a Virtual Machine MVP based in Holland and one of the most experienced fellows I know when it comes to fabric (servers, storage and networking). He was formerly a Cluster MVP, but has now devoted his spare time to participating in the VM community. Read his useful blogs at hyper-v.nu

Kristian Nese


Was that great or what? Hope you enjoyed the article and if you have questions or comments, make sure to enter them in the comments box below. Thanks!

Tom

Tom Shinder
tomsh@microsoft.com
Principal Knowledge Engineer, SCD iX Solutions Group
Follow me on Twitter: http://twitter.com/tshinder
Facebook:
http://www.facebook.com/tshinder


Go Social with Building Clouds!
Private Cloud Architecture blog
Private Cloud Architecture Facebook page
Private Cloud Architecture Twitter account
Private Cloud Architecture LinkedIn Group
Private Cloud TechNet forums
TechNet Private Cloud Solution Hub
Private Cloud on the TechNet Wiki

Comments
  • I am currently deploying SCVMM SP1 and Server 2012. From my experience, the trickiest thing is when the host only has 2 NICs. If you're not careful, SCVMM will lose communication with the Hyper-V host. The workaround is to use one NIC for the IP address of the Hyper-V host, then create the logical switch with an uplink using the other NIC and create a virtual network adapter for management traffic. Then join the first NIC to the uplink created by the logical switch. Note that this requires 2 IP addresses for the Hyper-V host during the whole process. VMware solves this problem elegantly by simply migrating the management IP address to the distributed switch.

    Another thing is, it seems that when using a static IP address for a virtual network adapter, the MAC address must be set manually! (The screenshot in the blog post confirms what I said)

  • @Noubadi Hi and thank you for your comment. To avoid losing the connection on the management NIC, you must select the setting on the virtual network adapter that inherits the settings from the physical adapter. Then VMM will transfer the IP from the physical NIC to a virtual NIC (the management NIC).

    Just make sure you have enabled trunk on the ports prior to this, and configured your uplink profiles correctly.

    -kn

  • Broadcom has issues with NIC teaming and VMQ:

    www.flexecom.com/high-ping-latency-in-hyper-v-virtual-machines

  • We have 4 NICs on a single subnet. What would be the best way to create logical networks and IP address pools for all 4 of the NICs so that they can be teamed? In other words, each of the logical networks must have an IP address range defined. Should a range of IP addresses be assigned to each logical network before creating classifications/port uplinks and virtual switches? ... Please let me know ....

  • Great Post Kristian. Thanks!

    Can you clarify if the pre-requisite 'Create LACP trunk on physical switches' is definitely required, if it's switch-independent mode as in the screenshots above?

    Although we have had an unrelated issue (I think), I wanted to check and make sure about LACP, as we are looking at all the touch points.

    In our VMM 2012 SP1 and Server 2012 environment, we have errors on VMM and Hyper-V when the NICs are teamed using Logical Switches exactly as in the screenshots above (Event ID 25259 error on VMM, suggesting the server NICs need to be configured, and Event ID 106 on the server with an error about the overlap of processor sets in sum-of-queues mode).

    Thanks in Advance.

    Thanks

    SS

  • Hi Kristian,

    Thanks for the great post, I found it very helpful.  I have one small issue though.  I currently have 2 Hyper-V 2012 Clusters (5 nodes each).  These are using dual 8Gb Fibre for SAN access using MPIO, dual 1Gb for Management using Windows Teaming, dual 10GbE using SCVMM Logical Switch, splitting out Live Migration and CSV virtual interfaces.   Everything works great internal to the clusters, but when I try to Live Migrate between the clusters using SCVMM I'm capped at 1.2Gbps on my LM network.  I have confirmed that it is going over the 10Gb network.  Near the end of the migration it spikes to 2-3Gbps, but for the most part it sits at around 800Mbps to 1.2Gbps.  Is this a setting I need to enable/tune?

    I have run Enable-NetAdapterChecksumOffload and Disable-NetAdapterBinding -ComponentID ms_netftflt on all my 10Gb interfaces.

    I have set the MinimumBandwidthWeight to 90 on LM and 10 on CSV

    We are using SCVMM SP1 Rollup1

    Any suggestions?