Last week I had the pleasure of attending TechEd Orlando and staffing the Windows Networking booth. We had a lot of people coming to the booth with great questions about networking technologies or just curious to see what’s new. One of the most common questions that came up was – “I heard you now have NIC teaming in the box. Can you show me? How does it work?”
Obviously, when you’re building a private cloud, high availability at the network level is critical functionality, and delivering it without relying on 3rd-party drivers or complicated configuration is another way in which Windows Server 2012 is truly a great platform for building your clouds.
In this blog, Don Stanwyck, the program manager of the NIC teaming feature, explains this feature in depth.
Windows Server 2012 is a cloud-optimized OS, delivering the key capabilities for building scalable, mission-critical cloud environments. The new NIC Teaming capability in this release of Windows Server brings continuous network availability and increased network performance, supporting greater VM density and lower operational costs.
Cloud environments need continuous availability to meet the demanding needs of high-density workloads. In particular, downtime in network connectivity creates ill will with users and may affect revenue. Service outages can result from network interface card (NIC) failures, network switch failures, or even something as mundane as an accidentally disconnected cable. The trend to increase the density of VMs on a physical server places even more pressure on the underlying network reliability – because a single outage can impact a variety of services. As a result, datacenter operators require solutions that can quickly and automatically recover from connectivity failures, while being easy to set up and manage.
At the same time, greater VM densities—and the push to virtualize demanding workloads such as media streaming—place new requirements for bandwidth aggregation across the datacenter. In particular, having invested in redundant network connectivity, datacenter operators should be able to make use of that spare capacity without requiring special hardware or changes to application workloads. This aggregated network capacity ultimately reduces network infrastructure investment and improves resource utilization.
Windows Server 2012 NIC Teaming provides transparent network failover and bandwidth aggregation. Uniquely, the Windows solution is hardware-independent and can be deployed under all existing workloads and applications on both physical and virtualized servers.
What is NIC Teaming?
A solution commonly employed to solve these network availability and performance challenges is NIC Teaming. NIC Teaming (also known as NIC bonding, network adapter teaming, load balancing and failover, etc.) is the ability to operate multiple NICs as a single interface from the perspective of the system. In Windows Server 2012, NIC Teaming provides two key capabilities: failover, to maintain connectivity when a NIC or its network path fails, and aggregation of bandwidth across the team members.
Many vendors have provided NIC teaming solutions for Windows Server, but these solutions share common limitations. They are typically tied to a particular NIC manufacturer, so you cannot always team together NICs from multiple vendors. Many of the solutions do not integrate well with other networking features of Windows Server or with features such as Hyper-V. Finally, each of these NIC teaming solutions is managed differently, and most cannot be managed remotely. As a result, it is not easy for an administrator to move from machine to machine in a heterogeneous environment and know how to configure NIC teaming on each host.
NIC Teaming in Windows Server 2012
Windows Server 2012 includes an integrated NIC Teaming solution that is easy to set up and manage, is vendor independent, and supports the performance optimizations provided by the underlying NICs.
NIC Teaming is easily managed through PowerShell or a powerful, intuitive UI (the UI is layered on top of PowerShell). Teams can be created, configured, monitored, and deleted at the click of a mouse. Multiple servers can be managed at the same time from the same UI. Through the power of PowerShell remote management, the NIC Teaming UI can be run on Windows 8 clients to remotely manage servers even when those servers are running Windows Server 2012 Server Core!
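To give a flavor of that remote management, here is a minimal sketch using the NetLbfo cmdlets over CIM sessions. The server names ("Server01", "Server02") and NIC names are hypothetical placeholders — substitute your own:

```powershell
# Sketch: manage teams on several servers from one workstation.
# "Server01" and "Server02" are placeholder host names.
$sessions = New-CimSession -ComputerName Server01, Server02

# List the existing teams on both servers.
Get-NetLbfoTeam -CimSession $sessions

# Create a team remotely on Server01 only (NIC names are placeholders).
New-NetLbfoTeam -Name MyTeam -TeamMembers NIC1, NIC2 `
    -CimSession (New-CimSession -ComputerName Server01)
```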
A team can include NICs from any vendor – and it can even include NICs from multiple vendors. This vendor-agnostic approach brings a common management model to even the most heterogeneous datacenter. New NICs can be added to systems as needed and effortlessly integrated to the existing NIC Teaming configuration.
Finally, the team supports all the networking features that the underlying NICs support, so you don’t lose important performance functionality implemented by the NIC hardware. The “no compromise” approach means that NIC Teaming can be deployed with confidence on all servers.
NIC Teaming Configuration Options
NIC Teaming in Windows Server 2012 supports two configurations that meet the needs of most datacenter administrators.
Customers who value route diversity (in order to withstand switch failures) can connect their hosts to different switches. In this “Switch Independent Mode,” the switches are not aware that different interfaces on the server comprise a team; all the teaming logic is done exclusively on the server. Many customers choose to operate in an active/active mode, where traffic is spread across both NICs until a failure occurs. Historically, some customers have preferred an active/standby mode, where all the traffic is on one team member until a failure is detected, at which point all the traffic moves to the standby team member. This mode of operation can also be configured in Switch Independent mode teaming. The real tradeoff is what happens when there is a failure: if you’ve configured active/standby, you will have the same level of performance in a failure condition, whereas performance is degraded after a failure if you go with active/active. On the other hand, when you don’t have a failure, you’ll have much greater bandwidth using active/active.
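As a rough sketch, an active/standby Switch Independent team could be created like this (NIC names are placeholders — check yours with Get-NetAdapter first):

```powershell
# Sketch: Switch Independent team with one active and one standby member.
New-NetLbfoTeam -Name MyTeam -TeamMembers NIC1, NIC2 `
    -TeamingMode SwitchIndependent

# Mark NIC2 as standby; it carries traffic only if NIC1 fails.
Set-NetLbfoTeamMember -Name NIC2 -AdministrativeMode Standby
```

Leaving both members in the default Active state instead gives you the active/active behavior described above.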
In addition to achieving reliability, customers can also choose to aggregate bandwidth to a single external switch using NIC Teaming. This is done by creating a team in a “Switch Dependent Mode” wherein all NICs that comprise the team are connected to the same switch. There are two common varieties of Switch Dependent teams: those that use no configuration protocol, a method often called static or generic teaming, and those that use the IEEE 802.1AX Link Aggregation Control Protocol (LACP) to coordinate between the host and the switch. Both of these models are fully supported in Windows Server 2012. For more details on the modes of operation and load distribution schemes, please refer to the NIC Teaming User’s Guide.
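The two Switch Dependent variants map directly onto the -TeamingMode parameter. A minimal sketch (NIC names are placeholders, and the corresponding switch ports must be configured to match):

```powershell
# Static (generic) teaming - no negotiation protocol between host and switch:
New-NetLbfoTeam -Name StaticTeam -TeamMembers NIC1, NIC2 -TeamingMode Static

# LACP - the host and switch negotiate the aggregation dynamically:
New-NetLbfoTeam -Name LacpTeam -TeamMembers NIC3, NIC4 -TeamingMode Lacp
```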
Switch Dependent Mode treats the members of the team as an aggregated big pipe with a minor restriction (explained below). Each side balances the load between the team members independent of what the other side is doing. And, subject to the minor restriction, the pipe is kept full in both directions.
What is that minor restriction? TCP/IP can recover from missing or out-of-order packets. However, out-of-order packets seriously impact the throughput of the connection. Therefore, teaming solutions make every effort to keep all the packets associated with a single TCP stream on a single NIC so as to minimize the possibility of out-of-order packet delivery. So, if your traffic load consists of a single TCP stream (such as a Hyper-V live migration), then having four 1 Gb/s NICs in an LACP team will still only deliver 1 Gb/s of bandwidth, since all the traffic from that live migration will use one NIC in the team. However, if you do several simultaneous live migrations to multiple destinations, resulting in multiple TCP streams, then the streams will be distributed amongst the teamed NICs.
Configuring NIC Teaming in Windows Server 2012
As mentioned previously, NIC Teaming provides a rich PowerShell interface for configuring and managing teams either locally or remotely. Moreover, for those who prefer a UI-based management model, the NIC Teaming UI is a complete management solution that runs PowerShell under the covers. Both PowerShell and UI administration are covered in depth in the NIC Teaming User’s Guide. Below are some highlights that show just how easy it is to set up NIC Teaming.
Suppose you have a server with four NICs: NIC1, NIC2, NIC3, and NIC4. In order to put NIC1 and NIC2 in a team, you can run this PowerShell command as an administrator:
New-NetLbfoTeam MyTeam NIC1,NIC2
When the command returns, you will have a team named “MyTeam” with team members NIC1 and NIC2, set up in Switch Independent mode. It is also simple to make more advanced changes. For example, the PowerShell command below will create the team as an LACP team bound to the Hyper-V switch.
New-NetLbfoTeam MyTeam NIC1,NIC2 -TeamingMode Lacp -LoadBalancingAlgorithm HyperVPort
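After creating a team either way, a quick sketch of how you might verify it (assuming the team name used above):

```powershell
# Show the team's mode, load-balancing algorithm, and status.
Get-NetLbfoTeam -Name MyTeam

# Show each member NIC's state (Active/Standby, Up/Failed).
Get-NetLbfoTeamMember -Team MyTeam
```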
As noted earlier, you could use the UI instead to achieve the same results. The NIC Teaming UI can be invoked from Server Manager or by invoking lbfoadmin.exe at a command prompt. The UI is available on Windows Server 2012 configurations that have local UI and on Windows Server 2012 or Windows 8 systems that run the Remote Server Administration Tools (RSAT). The UI can manage multiple servers simultaneously.
Now you can create a new team. Select the NICs you want to team (control-click each NIC) then right-click the group and click on Add to New Team:
This will bring up the New Team dialog. Enter the team name.
You can configure the team further to support the teaming mode and other properties.
Now the team is set up. It is easy to make changes to the team through the Team TASKS dropdown or by right-clicking on the team.
If you want to modify the team to be an active/standby team, simply right click on the team and select Properties.
This will bring up the Team Properties dialog. Click on the additional properties drop-down, then the Standby adapter drop-down, and select the standby NIC.
After you select OK to apply the change you will see that the NIC is now in Standby mode:
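The same change can be made from PowerShell. A minimal sketch, assuming a team whose member “NIC2” should become the standby adapter:

```powershell
# Equivalent of the UI change above: make NIC2 the standby member.
Set-NetLbfoTeamMember -Name NIC2 -AdministrativeMode Standby

# To return to active/active later, make it active again:
Set-NetLbfoTeamMember -Name NIC2 -AdministrativeMode Active
```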
NIC Teaming in Windows Server 2012 enables continuous network availability and increased network performance for all workloads, even when the team includes NICs from multiple vendors. NIC Teaming can be managed easily using PowerShell or the built-in NIC Teaming UI. NIC Teaming enables greater workload density while reducing operational costs for your private, public, and hybrid cloud deployments.
I have 2 HP StoreEasy servers with 4 NICs each, and I configured NIC teaming on 2 of them.
The teaming is OK.
I use another NIC for backup. I configured this interface without a gateway and added a static route, but I can't communicate with my backup network.
Do you have any ideas?
What is the recommendation with failover clustering in WS2012. Should you have a separate network for "private" and "public" traffic? Or create a NIC team and use it for both? Will the Cluster Validation process be clever enough to validate such a configuration?
After setting this, do I have to configure anything on my network switch? I am using a layer 2 1Gbps port network switch.
When one of the network cards included in the team is disabled, the IP of the team no longer responds.
1 VM, 2 vSwitches, Switch Independent, Address Hash, 2 NICs, active/passive.
- The active NIC is disabled: the standby becomes active and the team is OK.
- The disabled card is re-enabled: the team is OK.
- The standby NIC is disabled: the team is KO.
- The disabled card is re-enabled: the team is OK.
The primary member (the MAC of the team) never changes.
In this configuration, I have a single point of failure.
Could someone help me?
Our objective is ISP redundancy. We have 2 modem routers, each connected to a different ISP, and 2 NICs in the server, each connected to one of the modem routers. When ISP1 goes down, we want all traffic to go through ISP2. I understand that designating the ISP1 NIC as primary and the ISP2 NIC as standby would work just fine. However, would leaving all adapters active be a bad thing or a good thing in this scenario? I am thinking it would be nice to use both adapters to increase bandwidth rather than let ISP2 just sit there doing nothing unless/until ISP1 fails.