This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog.  Today’s blog post covers new features in Hyper-V Network Virtualization in Windows Server 2012 R2 and how it applies to the larger topic of “Transform the Datacenter.”  To read that post and see the other technologies discussed, read today’s post: “What’s New in 2012 R2: IaaS Innovations.”

This blog provides a high-level overview of Hyper-V Network Virtualization and details the new functionality available in Hyper-V Network Virtualization in Windows Server 2012 R2 and System Center 2012 R2 Virtual Machine Manager. If you are new to Hyper-V Network Virtualization, you should check out the technical details documentation.


The Transforming your datacenter – networking post does a great job of showing where Hyper-V Network Virtualization (HNV) fits in Microsoft’s overall Software-defined Networking (SDN) solution. Fundamentally, HNV provides a virtual network abstraction on top of a physical network. This abstraction gives each VM the illusion that it is running on a physical network while it is really running on a virtual network, much like the abstraction a hypervisor provides to the operating system running in a virtual machine. Hyper-V Network Virtualization provides this abstraction through an overlay network, built on top of the physical network, for each VM network. While this abstraction is great, the real question is what value it provides to customers. Figure 1 shows the benefits HNV provides for different target audiences.


Figure 1 HNV benefits for different audiences.

In the R2 release, the components that comprise our network virtualization solution include:

  1. Windows Azure Pack, providing a tenant-facing portal for creating virtual networks.
  2. System Center Virtual Machine Manager (SCVMM), providing centralized management of the virtual networks.
  3. Hyper-V Network Virtualization, providing the data plane needed to virtualize network traffic.
  4. Hyper-V Network Virtualization gateways, providing connections between virtual and physical networks.

New Hyper-V Network Virtualization Features in R2

While the remainder of this post will really dig into the new HNV platform capabilities, I wanted to provide a quick overview of all the features we added:

  • Inbox HNV Gateway
    A multi-tenant inbox gateway that performs site-to-site VPN, NAT and forwarding functions. This includes SCVMM being able to fully manage this HNV gateway.
  • Improved HNV interoperability with Hyper-V Virtual Switch Extensions
    This allows switch extensions to work in both the Customer Address (CA) and Provider Address (PA) spaces. In addition, a new hybrid forwarding mode allows third-party network virtualization solutions to co-exist with HNV on the same Hyper-V host. For more details, check out the Hyper-V Extensible Switch Enhancements in Windows Server 2012 R2 post.
  • Enhanced Diagnostics of HNV VM Networks
    R2 includes a number of new diagnostic tools that enhance a customer’s ability to diagnose HNV networks. In R2, we enhanced ping.exe to allow pinging provider addresses and shipped two new PowerShell cmdlets (Test-VMNetworkAdapter and Select-NetVirtualizationNextHop) that enable diagnostics of HNV policy and the customer address space. In addition, the ability for Message Analyzer to decode NVGRE packets is coming. For more details, check out the New Networking Diagnostics with PowerShell in Windows Server 2012 R2 post. A brief sketch of these tools in use follows this list.
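As a quick illustration, here is a hedged sketch of these diagnostics; the VM name, addresses and VSID are illustrative placeholders, and the exact parameter sets may differ from what is shown here:

PS C:\> # Ping a provider address directly (the -p switch is the R2 enhancement to ping.exe)
PS C:\> ping -p <ProviderAddress>
PS C:\> # Inject a test packet from a VM network adapter to validate CA-space connectivity and HNV policy
PS C:\> Test-VMNetworkAdapter -VMName Tenant1VM -Sender -SenderIPAddress <SourceCA> -ReceiverIPAddress <DestinationCA> -NextHopMacAddress <NextHopMAC> -SequenceNumber 100
PS C:\> # Ask HNV which provider address a given CA-to-CA flow resolves to
PS C:\> Select-NetVirtualizationNextHop -SourceCustomerAddress <SourceCA> -DestinationCustomerAddress <DestinationCA> -SourceVirtualSubnetID 5000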

HNV Architecture Update

The first feature to talk about is our updated architecture of the HNV data plane running on each Hyper-V host. Figure 2 shows the architectural differences between HNV in Windows Server 2012 and in Windows Server 2012 R2. The basic change was that the HNV filter moved from being an NDIS lightweight filter (LWF) to being part of the Hyper-V virtual switch.

In Windows Server 2012, HNV being an NDIS LWF meant that Hyper-V switch extensions only worked on the customer address space. For capture and filter extensions, this meant they were not aware of the underlying physical network being used for HNV packets. For forwarding switch extensions, it meant they could not co-exist with HNV, so customers had to choose between using HNV and using a particular forwarding extension. In R2, we added the ability for switch extensions to work on both the original customer address packet and the encapsulated provider address packet. In addition, forwarding switch extensions can now co-exist with HNV, allowing multiple network virtualization solutions (one provided by HNV and another provided by the forwarding switch extension) to run on the same Hyper-V host.

Improved interoperability with switch extensions was the primary reason for the change, but a nice side effect is that the HNV NDIS LWF no longer has to be bound to network adapters. Once you attach a network adapter to the virtual switch, you can enable HNV simply by assigning a Virtual Subnet ID to a particular virtual network adapter. For those using SCVMM to manage VM networks this is transparent, but for anyone using PowerShell it saves an often-missed step, as the sketch below shows.
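As an example of how simple this is now, here is a minimal sketch that puts a VM’s network adapter on a virtual subnet; the VM name Tenant1VM and VSID 5000 are illustrative:

PS C:\> # Assign a Virtual Subnet ID directly on the VM's network adapter; no LWF binding step is needed in R2
PS C:\> Get-VMNetworkAdapter -VMName Tenant1VM | Set-VMNetworkAdapter -VirtualSubnetId 5000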


Figure 2 HNV Architectural Update in Windows Server 2012 R2

Dynamic IP Address Learning

The next feature to look at is dynamic IP address learning in the customer address space. We learned from customers that it was important to enable highly available services to run in VM networks. In order to support high availability in virtual networks, we needed to enable clustering both of VMs running in the VM network and of the HNV gateway. In addition to high availability, dynamic IP address learning allows a customer to run DHCP, DNS and AD in a VM network.

We are excited about the new scenarios this enables in HNV, but we are just as proud of how we built this feature. One of the key design principles of HNV is a highly scalable control plane. We accomplish this by having a centralized policy store (SCVMM) and little to no control traffic on the network. This is still a key design principle of HNV, so to support dynamic IP address learning we did not want to use an inefficient flood-and-learn approach; instead, we built an efficient implementation.

First, for broadcast or multicast packets in a VM network, we use a PA multicast IP address if one is configured. However, the typical data center operator does not enable multicast in their environment. As a result, when a PA multicast address is not available, we use intelligent PA unicast replication: we unicast packets only to PA addresses that are configured for the particular virtual subnet the packet is on, and we send only one unicast packet per host no matter how many relevant VMs are on that host. Finally, once a host learns a new IP address, it notifies SCVMM. At this point, the learned IP address becomes part of the centralized policy that SCVMM pushes out. This allows for rapid dissemination of HNV routing policy while limiting the network overhead of disseminating it.

In addition to this efficient unicast replication, we added support for compliant address resolution semantics. This includes support for Duplicate Address Detection (DAD), Neighbor Unreachability Detection (NUD) and Address Resolution Protocol (ARP) packets in the CA space for both IPv4 and IPv6. The HNV filter also provides a reliable ARP proxy for any known routing policies, once again reducing the amount of control traffic that goes out on the physical network.

This functionality is fully supported in PowerShell as well as SCVMM, so let us look briefly at how it would be configured using PowerShell. Figure 3 shows the PowerShell command needed to configure dynamic IP address learning for a particular VM network adapter. There are a couple of new things to point out in the New-NetVirtualizationLookupRecord command required to enable IP address learning:

  • You set the CustomerAddress parameter to a placeholder value (shown as <CustomerAddress> in Figure 3); the actual address will be learned dynamically
  • You set the Type parameter to the “L2Only” option

With these two things, HNV will begin learning IP addresses for this specific VM network adapter. One thing to note is that for each VM network adapter on a host where you want to turn on dynamic IP address learning, you will need to create this L2Only lookup record for its specific MAC address.

PS C:\> New-NetVirtualizationLookupRecord -CustomerAddress <CustomerAddress> -VirtualSubnetID 5000
-MACAddress 020304050607 -ProviderAddress <ProviderAddress> -Type L2Only
-Rule TranslationMethodEncap

Figure 3 The PowerShell cmdlet needed to enable dynamic IP learning for a specific MAC address

Once an IP address is learned for that particular VM, either by the VM acquiring an address through DHCP or by the user setting a static IP address in the VM itself, you will see two (or more) policy entries for that VM: the L2Only lookup record with the placeholder customer address, and a new dynamic record showing the actual customer address assigned to the VM. If the learned IP address changes over time, you will see multiple dynamic records, one for each learned IP address. Figure 4 shows both the L2Only record and the Dynamic record.

PS C:\WINDOWS\system32> Get-NetVirtualizationLookupRecord -VirtualSubnetID 5000
CustomerAddress : <CustomerAddress>
VirtualSubnetID : 5000
MACAddress : 020304050607
ProviderAddress : <ProviderAddress>
CustomerID : {00000000-0000-0000-0000-000000000000}
Context :
Rule : TranslationMethodEncap
VMName :
UseVmMACAddress : False
Type : L2Only

CustomerAddress : <learned CustomerAddress>
VirtualSubnetID : 5000
MACAddress : 020304050607
ProviderAddress : <ProviderAddress>
CustomerID : {00000000-0000-0000-0000-000000000000}
Context :
Rule : TranslationMethodEncap
VMName :
UseVmMACAddress : False
Type : Dynamic

Figure 4 Lookup records associated with a dynamically learned IP address
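If you only want to see the learned addresses, you can filter the lookup records client-side; a quick sketch:

PS C:\> # Show only the dynamically learned records for virtual subnet 5000
PS C:\> Get-NetVirtualizationLookupRecord -VirtualSubnetID 5000 | Where-Object { $_.Type -eq 'Dynamic' }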

HNV + Windows NIC Teaming

The third feature to talk about is the integration between HNV and Windows NIC Teaming. In Windows Server 2012, NIC teaming could provide failover for a VM network, but it could not provide load balancing and traffic spreading. For example, NIC teaming typically allows a customer to get roughly double the throughput from two NICs than they would see when using one, but in Windows Server 2012 a VM network could only achieve the maximum throughput of a single network adapter. For R2, we enabled both inbound and outbound spreading of virtualized traffic on a NIC team. This means that both traffic leaving a host and traffic coming into a host can utilize all the network adapters in the NIC team.
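For reference, here is a minimal sketch of creating a two-NIC team that can take advantage of this spreading; the team and NIC names are illustrative:

PS C:\> # Create a switch-independent team of two NICs using the Dynamic load-balancing mode added in R2
PS C:\> New-NetLbfoTeam -Name HNVTeam -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic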

To enable outbound spreading, no additional configuration is needed. To enable inbound spreading, you must set a MAC address on the provider address; this allows the switch to spread traffic properly for the NIC team. To get optimal inbound spreading, you will need one (or more) PAs per NIC in the team. For example, a NIC team of two NICs should have two or more PAs, with the CAs spread between them.

In Figure 5, you can see a MAC address set on the provider address using the New- or Set-NetVirtualizationProviderAddress cmdlet.

PS C:\> New-NetVirtualizationProviderAddress -ProviderAddress <ProviderAddress>
-MACAddress 223344556677 -InterfaceIndex 1 -PrefixLength 24 -VlanId 100

Figure 5 Configuring HNV + NIC teaming
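Because a two-NIC team should have at least two PAs, you would then repeat the cmdlet for a second provider address with its own MAC address; a sketch (the address placeholder and MAC are illustrative):

PS C:\> # Add a second PA with its own MAC so inbound traffic can spread across both team members
PS C:\> New-NetVirtualizationProviderAddress -ProviderAddress <SecondProviderAddress> -MACAddress 223344556688 -InterfaceIndex 1 -PrefixLength 24 -VlanId 100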

NVGRE Encapsulated Task Offload

The last item is not a new feature per se, because Windows Server 2012 already supported NVGRE task offload, but our first network adapter partners have announced support for NVGRE task offload in their products. Let’s first take a step back and look at why this is even needed. High-speed network adapters implement a number of offloads (for example, Large Send Offload (LSO), Receive Side Scaling (RSS) and Virtual Machine Queue (VMQ)) that allow full utilization of the network adapter’s throughput. As an example, a computer with a network adapter capable of 10 Gbps might only achieve 4 or 5 Gbps without particular offloads enabled. In addition, even if it is capable of full throughput, the CPU utilization needed to reach maximum throughput will be much higher than with offloads enabled.

For non-virtualized traffic, offloads just work. NVGRE, on the other hand, is an encapsulation protocol, which means the network adapter must access the inner CA packet to perform the offload. In R2, NVGRE is the only way to virtualize traffic (Windows Server 2012 also supported IP Rewrite, but it was not recommended and has been removed from Windows Server 2012 R2), so NVGRE task offload becomes even more important.

We worked with our network adapter partners on a solution to this performance issue called NVGRE Encapsulated Task Offload. When a network adapter supports NVGRE Encapsulated Task Offload, it ensures that all relevant offloads work with HNV.
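On a host with a capable adapter, you can inspect and toggle this offload from PowerShell; a minimal sketch (the adapter name "Ethernet 1" is illustrative):

PS C:\> # Check whether the adapter reports NVGRE Encapsulated Task Offload support
PS C:\> Get-NetAdapterEncapsulatedPacketTaskOffload -Name "Ethernet 1"
PS C:\> # Enable the offload if it is currently disabled
PS C:\> Enable-NetAdapterEncapsulatedPacketTaskOffload -Name "Ethernet 1"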

At TechEd 2013, two partners announced that their next-generation network adapters will support NVGRE Encapsulated Task Offload. You can read the press releases from Mellanox and Emulex for more details.

We are continuing to work with additional NIC vendors to enable NVGRE Encapsulated Task Offload. Stay tuned for more announcements…


Hyper-V Network Virtualization is a key component in Microsoft’s Software-defined Networking solution and provides key benefits to customers, like the ability to bring your own network topology into a cloud environment and the ability to live migrate VMs across the physical network. In R2, we further enhanced HNV with new capabilities like dynamic IP address learning and NIC teaming load balancing with HNV, and our partners announced network adapters with NVGRE Encapsulated Task Offload support.

Additional Links

For more details on Hyper-V Network Virtualization and what is new in R2, you can view our TechEd 2013 talk at:

Deep Dive on Hyper-V Network Virtualization in Windows Server 2012 R2

For more details on how SCVMM manages HNV, you can view our TechEd talks at:

How to Design and Configure Networking in Microsoft System Center - Virtual Machine Manager and Hyper-V (Part 1 of 2)

How to Design and Configure Networking in Microsoft System Center - Virtual Machine Manager and Hyper-V (Part 2 of 2)

For more details on Microsoft’s overall SDN solution and how HNV fits, you can view our TechEd talk at:

Everything You Need to Know about the Software Defined Networking Solution from Microsoft

To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.


CJ Williams, Principal Program Manager, Windows Core Networking team