A blog by Jose Barreto, a member of the File Server team at Microsoft.
© Copyright 2004-2012 by Jose Barreto. All rights reserved.
Question via e-mail:
I am using blade servers for my Hyper-V cluster and I can only have two NICs per blade in this configuration.
I am considering two options on how to configure the NICs:
1) Use one NIC for the internal network and one NIC for the external network, connected to the virtual switch
2) Team the two NICs together and use the same path for all kinds of traffic
What would you recommend?
If you're using clusters, I assume you're concerned with high availability and network fault tolerance. In this case using one NIC for each kind of traffic creates two single points of failure. You should avoid that.
I would recommend teaming the two NICs, connecting the team to the virtual switch, and adding a few virtual NICs to the parent partition for your storage, migration, cluster and management traffic. You can then use QoS policies to manage your quality of service.
If you're using SMB for storage, be sure to have multiple vNICs (one for each physical NIC behind the team), so you can properly leverage SMB Multichannel in combination with NIC teaming. By the way, SMB Direct (RDMA) won't work with this scenario.
The first thing you want to do is create a team out of the two NICs and connect the team to a Hyper-V virtual switch. For instance:
New-NetLbfoTeam Team1 -TeamMembers NIC1, NIC2 -TeamNicName TeamNIC1
New-VMSwitch TeamSwitch -NetAdapterName TeamNIC1 -MinimumBandwidthMode Weight -AllowManagementOS $false
Next, you want to create multiple vNICs on the parent partition, one for each kind of traffic (two for SMB). Here's an example:
Add-VMNetworkAdapter -ManagementOS -Name SMB1 -SwitchName TeamSwitch
Add-VMNetworkAdapter -ManagementOS -Name SMB2 -SwitchName TeamSwitch
Add-VMNetworkAdapter -ManagementOS -Name Migration -SwitchName TeamSwitch
Add-VMNetworkAdapter -ManagementOS -Name Cluster -SwitchName TeamSwitch
Add-VMNetworkAdapter -ManagementOS -Name Management -SwitchName TeamSwitch
After this, you want to configure the vNICs properly: set IP addresses and create a separate subnet for each kind of traffic. You can optionally put each one on a different VLAN.
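As an illustration, here is one way to sketch that step. All the addresses, subnets and VLAN IDs below are made up for the example; note that the vNICs created above show up in the parent partition with interface aliases of the form "vEthernet (Name)":

```powershell
# Hypothetical addressing: one /24 subnet per traffic type
New-NetIPAddress -InterfaceAlias "vEthernet (SMB1)" -IPAddress 192.168.101.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (SMB2)" -IPAddress 192.168.102.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (Migration)" -IPAddress 192.168.103.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (Cluster)" -IPAddress 192.168.104.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (Management)" -IPAddress 192.168.105.11 -PrefixLength 24

# Optionally tag each vNIC with its own VLAN (example VLAN IDs)
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName SMB1 -Access -VlanId 101
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName SMB2 -Access -VlanId 102
```

If you do use VLANs, remember that the physical switch ports behind the team need to be configured to trunk those VLAN IDs.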
Since you now have several NICs and you're already in manual configuration territory, you might want to help SMB Multichannel by pointing it to the NICs that SMB should use. You can do this by configuring SMB Multichannel constraints instead of letting SMB try all the different paths. For instance, assuming that your Scale-Out File Server name is SOFS, you could use:
New-SmbMultichannelConstraint -ServerName SOFS -InterfaceAlias SMB1, SMB2
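To confirm the constraint is in place and that SMB is actually spreading traffic across both vNICs, you can check from the Hyper-V host while there is an active connection to the file server (SOFS is the hypothetical name from above):

```powershell
# List the configured constraints for the server name
Get-SmbMultichannelConstraint

# Show the active SMB connections and the interfaces Multichannel selected
Get-SmbConnection
Get-SmbMultichannelConnection
```

If Multichannel is working as intended, you should see one connection per SMB vNIC in the output of Get-SmbMultichannelConnection.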
Last but not least, you might also want to set QoS for each kind of traffic, using the facilities provided by the Hyper-V virtual switch. One way to do it is:
Set-VMNetworkAdapter -ManagementOS -Name SMB1 -MinimumBandwidthWeight 20
Set-VMNetworkAdapter -ManagementOS -Name SMB2 -MinimumBandwidthWeight 20
Set-VMNetworkAdapter -ManagementOS -Name Migration -MinimumBandwidthWeight 20
Set-VMNetworkAdapter -ManagementOS -Name Cluster -MinimumBandwidthWeight 5
Set-VMNetworkAdapter -ManagementOS -Name Management -MinimumBandwidthWeight 5
Set-VMNetworkAdapter -VMName * -MinimumBandwidthWeight 1
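Keep in mind these weights are relative shares, not percentages: with the example values, the parent-partition vNICs reserve 70 units (20+20+20+5+5) and each VM vNIC reserves 1 more, so the effective minimum for each adapter is its weight divided by the total. To review what was actually applied, you can inspect the bandwidth settings on the host vNICs:

```powershell
# Show the minimum bandwidth weight configured on each parent-partition vNIC
Get-VMNetworkAdapter -ManagementOS |
    Format-Table Name, @{Label="Weight"; Expression={$_.BandwidthSetting.MinimumBandwidthWeight}}
```

This is just a quick sanity check; under no contention, any adapter can still burst above its minimum share.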
There is a great TechNet page with details on this and other network configurations at http://technet.microsoft.com/en-us/library/jj735302.aspx