Software Defined Networking – Hybrid Clouds using Hyper-V Network Virtualization (Part 2)


Hello Readers and Viewers!

This is part 2 (of 3) in the series on Software Defined Networking – Hybrid Clouds using Hyper-V Network Virtualization. In the previous post we covered all of the major concepts related to Hyper-V Network Virtualization. In this post we will examine a simple SDN implementation scenario in detail and illustrate how it can be realized efficiently using Hyper-V Network Virtualization (HNV), Windows Server 2012 R2, and System Center 2012 R2.


Implementing Hyper-V Network Virtualization: Conceptual “Simple” Setup.

Scenario overview:

Network Virtualization, described in Part 1, enables service providers such as Fabrikam to deploy cloud hosting entirely in software on top of existing legacy network infrastructure. Fabrikam makes optimal use of its physical infrastructure by deploying VMs for Woodgrove Bank and Contoso Ltd with overlapping IP addresses on the same hosts, over the same physical network. The advantage of this approach is that Fabrikam can deploy the solution entirely in software. Let’s look at the business requirements of the two organizations, Woodgrove Bank and Contoso Ltd, and how the cloud service provider Fabrikam fulfills those requirements using HNV.

Woodgrove Bank has a two-tier mobile application. The first tier is a web front end to which Woodgrove Bank employees connect. The second tier consists of multiple compute-intensive backend application servers that service the mobile application.

The number of servers required depends on the number of mobile apps connected at any given point in time. Given the dynamic nature of the client connections, in a traditional environment Woodgrove Bank would have to deploy enough servers to meet peak demand. The problem with this approach is that during off-peak hours the deployed servers sit idle, incurring avoidable capital and operational expenditure (CAPEX and OPEX). By deploying the application on Fabrikam’s infrastructure, Woodgrove Bank avoids the CAPEX and OPEX of idle servers, since it pays only for the servers it actually uses.

Woodgrove Bank would like to deploy its SQL backend servers on its own premises because of the privacy and confidentiality of the data. When the web front ends are deployed in the Fabrikam network, they need to securely access the SQL data on the Woodgrove Bank premises.

Let’s now look at the networking requirements of Woodgrove Bank’s deployment in the Fabrikam network. Woodgrove Bank’s mobile application should be reachable over the Internet, so at least one public IP address is needed to which initial connections are established. Woodgrove Bank chooses to assign IP addresses from the private range (10.0.0.0/24) to the rest of the VMs. The Woodgrove Bank VMs running the mobile application backend services have to communicate with applications on mobile devices over the Internet. Since those VMs are assigned private IP addresses (10.0.0.0/24), their packets cannot be sent directly over the Internet; they must be Network Address Translated (NAT) to a public IP when they leave the cloud service provider network.

Connectivity between the Woodgrove Bank VMs in the Fabrikam network and the servers on the enterprise premises is provided by a site-to-site (S2S) Virtual Private Network (VPN). The S2S VPN allows the Woodgrove Bank virtual network in Fabrikam to be seen as an extension of Woodgrove Bank’s on-premises network. Fabrikam deploys an S2S VPN endpoint at its edge to enable Woodgrove Bank to connect to its virtual network.

The same requirements hold true for other tenants such as Contoso.

Fabrikam prefers to terminate the S2S VPN connections of multiple tenants on a single gateway to bring down the cost of its gateway infrastructure. To simplify route configuration, Fabrikam deploys a single gateway for both NAT and S2S VPN. Details of how S2S VPN and NAT are enabled and deployed through a simple conceptual setup are covered in the following sections of this blog post.

Lab Setup Topology

The setup demonstrates the operation of Hyper-V Network Virtualization in a simulated service provider datacenter called Fabrikam, using Hyper-V virtualization and System Center 2012 R2 Virtual Machine Manager. Simulated on-premises customer networks for the tenants Contoso and Woodgrove are used to demonstrate access to hosted cloud resources over a simulated Internet connection. The two tenant networks share the same computer names and IP addresses to demonstrate the secure isolation provided by Hyper-V Network Virtualization.

The following figure shows the simple setup and the configurations we will use. We will walk through each of the servers, starting with the Tenant host, then the Gateway host, and finally the machines simulating the remote locations.

image

Setup Prerequisites

1. Three physical hosts:

    • Each physical host must be running Windows Server 2012 R2 with the Hyper-V role enabled
    • Each host should have a minimum of 8 GB RAM
    • One dedicated physical server for running the virtual machine used as the virtualization gateway. The gateway host must have three network adapters.
    • One physical server for running the virtual machines that use virtualized networks, as well as SCVMM.
    • One physical server used to simulate a customer on-premises network infrastructure.

2. An Active Directory Domain with DNS for VMM and fabric resources.

3. A virtual machine running Windows Server 2012 R2 with System Center 2012 R2 – Virtual Machine Manager installed, and hosts are already deployed and managed by VMM in the following host group structure:

SNAGHTML94eff4d

4. Five VMs with Windows Server 2012 R2 as the operating system: 2 VMs placed on the Tenant host, 1 on the Gateway host, and 2 on the host simulating the customers’ on-premises networks.

Get started!

The implementation of Hyper-V Network Virtualization based on the above setup topology requires the following artifacts to be configured in SCVMM:

1. Logical Networks

  • Infrastructure: This is the network SCVMM will use to communicate to the Hyper-V hosts and to the gateway virtual machine.
    Note: If your Infrastructure and Public networks are routable to the same location (i.e., if the Infrastructure network can reach the Internet), you must make sure that the gateway metric in the Infrastructure IP pool is greater than the metric in the Public IP pool.
  • Tenant: This is the network that backs the tenant networks. This logical network uses a static IP address pool and has network virtualization enabled.
  • Public: This is the external network that generally represents the Internet. It uses a static IP address pool so that SCVMM can assign IP addresses for NAT and for the site-to-site VPN endpoint.

2. Logical Switches and Port Profiles (Optional)

3. Network Service

  • GATEWAY SERVICE: Microsoft Windows Server Gateway: Hyper-V Network Virtualization Gateway for Site-to-Site (S2S) VPN and NAT.

4. VM Networks

  • Infrastructure: connected to Infrastructure Logical Network
  • Public: connected to Public Logical Network
  • Contoso: connected to Tenant Logical Network
  • Woodgrove: connected to Tenant Logical Network
Step 1: Defining logical networks with associated IP pools

A logical network, together with one or more associated network sites, is a user-defined named grouping of IP subnets, VLANs, or IP subnet/VLAN pairs that is used to organize and simplify network assignments. In this section we will define logical networks with associated network sites and IP pools in VMM for Infrastructure, Public, and Tenant (NetVirt). Then define VM networks for Infrastructure and Public.

To define the Infrastructure logical network, do the following:

1. In the Fabric pane, expand Networking, and then click Logical Networks. Logical networks represent an abstraction of the underlying physical network infrastructure. By default, when you add a Hyper-V host to VMM management, VMM automatically creates logical networks that match the first DNS suffix label of the connection-specific DNS suffix on each host network adapter.
NOTE: it is recommended to add connection-specific DNS suffixes to the host adapters for logical networks to be easily identified.
2. In the Logical Networks detail pane, right-click the existing logical network, and then click Properties.
3. Change the Name value to Infrastructure. Click Network Site. Under Network sites, click Add.
4. Under Host groups that can use this network site, select All Hosts.
5. Under Associated VLANs and IP subnets, click Insert row. Type 10.1.126.0/24 under IP subnet. Click OK.

To create an IP pool for the Infrastructure logical network, do the following:

1. In the Fabric pane, expand Networking, and then click Logical Networks.
2. Right-click Infrastructure, and then click Create IP Pool.
3. The Create Static IP Address Pool Wizard opens.
4. On the Name page, enter Infrastructure IP Pool as the name. Click Next.
5. In the Network Site page, verify that Use an existing network site is selected, and that Infrastructure_0 is selected with IP subnet 10.1.126.0/24.
6. Click Next.
7. On the IP address range page, change the Starting IP address to 10.1.126.10. Change Ending IP address to 10.1.126.20. Click Next.
8. On the Gateway page, click Next.
9. On the DNS page, next to DNS server address, click Insert. Type the address of the local DNS server. Click Next.
10. On the WINS page, click Next.
11. On the Summary page, click Finish.

To define the Public logical network, do the following:

1. In the Logical Networks detail pane, right click on Logical Networks to create a new logical network.
2. Assign a name and a description. Make sure only the ‘One connected network’ option is selected. Click Next to proceed. Click Network Site. Under Network sites, click Add.
3. Under Host groups that can use this network site, select All Hosts.
4. Under Associated VLANs and IP subnets, click Insert row. Type 131.107.0.0/24 under IP subnet. Click OK.

To create an IP pool for the Public logical network, do the following:

1. Right-click Public, and then click Create IP Pool.
2. The Create Static IP Address Pool Wizard opens.
3. On the Name page, enter Internet IP Pool as the name. Click Next.
4. In the Network Site page, verify that Use an existing network site is selected, and that Public_0 is selected with IP subnet 131.107.0.0/24.
5. Click Next.
6. On the IP address range page, change the Starting IP address to 131.107.0.2. Change Ending IP address to 131.107.0.5. Click Next.
7. On the Gateway page, click Next.
8. On the DNS page, next to DNS server address, click Insert. Type the address of your public DNS Server. Click Next.
9. On the WINS page, click Next.
10. On the Summary page, click Finish.

To define the Tenant logical network, do the following:

1. Right-click Logical Networks, and then click Create Logical Network. The Create Logical Network wizard launches.
2. Next to Name, type Tenant. Next to Description, type Tenant Networks (NetVirt). Under One connected network, select the checkbox Allow new VM networks created on this logical network to use network virtualization.


clip_image004
3. Click Next. Under Network sites, click Add.
4. Under Host groups that can use this network site, select All Hosts.
5. Under Associated VLANs and IP subnets, click Insert row. Type 10.1.1.0/24 under IP subnet. Click Next.
NOTE: On the Associated VLANs and IP subnets page, you can set a VLAN. This associates the Provider Addresses with a VLAN. You might want to do this if, for instance, you want all your HNV traffic to be isolated on a separate VLAN.
6. On the Summary page, click Finish.

To create an IP pool for the Tenant logical network, do the following:

1. In the Fabric pane, expand Networking, and then click Logical Networks.
2. Right-click Tenant, and then click Create IP Pool.
3. The Create Static IP Address Pool Wizard opens.
4. On the Name page, enter PA IP Pool as the name. Next to Logical network, select Tenant (NetVirt). Click Next.
5. In the Network Site page, verify that Use an existing network site is selected, and that Tenant_0 is selected with IP subnet 10.1.1.0/24.
6. Click Next.
7. On the IP address range page, change the Starting IP address to 10.1.1.1. Change Ending IP address to 10.1.1.10. Click Next.
NOTE: The Provider Addresses distributed by the SCVMM PA IP pool are different from the IP address of the underlying NIC or NIC team of the hosts. Provider Addresses must be routable on the network to and from any other host that has a VM using the same virtual network.
NOTE: The SCVMM Provider Address IP pool distributes one IP address per routing domain (VM network).
8. On the Gateway page, click Next.
9. On the DNS page, click Next.
10. On the WINS page, click Next.
11. On the Summary page, click Finish.
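The console steps in this section can equivalently be scripted with the SCVMM PowerShell module. The following is a minimal sketch for the Tenant logical network and its PA pool, assuming the names used in this walkthrough (Tenant, Tenant_0, PA IP Pool, All Hosts) and a VMM console/module already connected to the VMM server:

```powershell
# Create the Tenant logical network with network virtualization enabled
$logicalNet = New-SCLogicalNetwork -Name "Tenant" -Description "Tenant Networks (NetVirt)" `
    -LogicalNetworkDefinitionIsolation $false -EnableNetworkVirtualization $true

# Create the network site (Tenant_0) scoped to the All Hosts host group
$allHosts = Get-SCVMHostGroup -Name "All Hosts"
$subnetVlan = New-SCSubnetVLan -Subnet "10.1.1.0/24" -VLanID 0
$site = New-SCLogicalNetworkDefinition -Name "Tenant_0" -LogicalNetwork $logicalNet `
    -VMHostGroup $allHosts -SubnetVLan $subnetVlan

# Create the Provider Address (PA) pool on that site
New-SCStaticIPAddressPool -Name "PA IP Pool" -LogicalNetworkDefinition $site `
    -Subnet "10.1.1.0/24" -IPAddressRangeStart "10.1.1.1" -IPAddressRangeEnd "10.1.1.10"
```

The same pattern (logical network, network site, static IP pool) applies to the Infrastructure and Public networks; only the names, subnets, and the network virtualization flag differ.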

After the logical networks and associated IP pools have been successfully configured, the view pane should look as shown below.


clip_image007

NOTE: Make sure to assign each logical network to each host physical adapter by editing the Properties of each host, selecting Hardware and checking the right logical network for each Network adapter.


Step 2: Implementing Network Virtualization Gateway Services

Windows Server 2012 R2 includes a new inbox network virtualization gateway provider that integrates with System Center 2012 R2 VMM. Configuring the Windows Server Gateway for site-to-site VPN and NAT functionality consists of the following:

  • Configure the Hyper-V Host HV-EDGE as a dedicated gateway host
  • Deploy and configure the Gateway VM
  • Install the gateway as a VMM Network Service

1. Configure the HV-EDGE server as a dedicated gateway host

1. To do this, right-click the HV-EDGE host and then click Properties. On the Host Access tab, select the This host is a dedicated network virtualization gateway check box.

clip_image009

2. Use Windows PowerShell to verify that this is configured. For example:
Get-SCVMHost "HV-EDGE" | ft -Property Name, IsDedicatedToNetworkVirtualizationGateway
NOTE: The output must show True in the IsDedicatedToNetworkVirtualizationGateway column for all hosts in the gateway cluster.
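The same setting can also be applied from PowerShell rather than the console. The sketch below assumes that Set-SCVMHost exposes a parameter mirroring the IsDedicatedToNetworkVirtualizationGateway property shown above:

```powershell
# Assumption: the -IsDedicatedToNetworkVirtualizationGateway parameter
# mirrors the property of the same name returned by Get-SCVMHost.
$gwHost = Get-SCVMHost "HV-EDGE"
Set-SCVMHost -VMHost $gwHost -IsDedicatedToNetworkVirtualizationGateway $true
```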

2. Deploy and configure the Gateway VM

GatewayVM is a virtual machine configured as a Hyper-V Network Virtualization gateway for site-to-site (S2S) VPN and NAT. The GatewayVM requirements are as follows:

  • Software Requirements
      • Operating System: Windows Server 2012 R2 (Full or Core)
      • Roles: The following role services should be enabled on the gateway VM: Remote Access -> DirectAccess and VPN (RAS), Routing
      • Features: Remote Access Management Tools -> Remote Access GUI and Command-Line Tools, Remote Access module for Windows PowerShell
  • Network Requirements
    GatewayVM must be configured with three virtual network adapters. One adapter is connected to the external virtual switch on HV-EDGE used to simulate an Internet connection, and the other two adapters are connected to the external virtual switch on HV-EDGE used to simulate a service provider datacenter connection.
  • Firewall Requirements
    SCVMM uses WinRM to manage the virtualization gateway (WinRM traffic is allowed by default by Windows Firewall).
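The role and feature requirements listed above can be installed from an elevated PowerShell prompt on the gateway VM. A minimal sketch using the standard Windows Server feature names:

```powershell
# Install the Remote Access role with its role services
# (DirectAccess and VPN, Routing) and the management tools.
Install-WindowsFeature -Name RemoteAccess, DirectAccess-VPN, Routing -IncludeManagementTools
```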


You will now deploy a gateway service to HV-EDGE host. Begin this process with the Add Network Service wizard.

1. In the Fabric workspace, click Add Resources, and then click Network Service to start the Add Network Service wizard.
2. For the manufacturer select Microsoft and model select Windows Server Gateway.
3. Select a RunAs account that has administrative credentials on the gateway.
4. Enter a connection string. When multiple parameters are combined, the connection string will look similar to this (for S2S/NAT):
VMHost=HV-Edge.contoso.com;GatewayVM=MTGW01.contoso.com;BackEndSwitch=HNVSwitch;
5. No certificates will be returned, so you can skip the Certificates panel.
6. We recommend that you test the connection before you close the wizard.
7. Select Compute, Management and Edge host groups for the scope.


clip_image011
8. Complete the wizard and wait for the Add Network Service device job to complete successfully.

Now you have to connect the front-end and back-end adapters. You must tell VMM which network site to use for the front-end and back-end adapters, since VMM needs to be able to allocate IP addresses from those sites and to know which VM networks can use this gateway.

1. In the Fabric workspace, expand Networking, and then click Network Service. Select the gateway that you just added.
2. Open the Properties.
3. In the GatewayService Properties dialog box, select the Enable front end connection check box. In the Front end network adapter list, click Public_vNic.
4. In the Front end network site list, click Public.
5. Select the Enable back end connection check box.
6. In the Back end network adapter list, click Tenant_vNic.
7. In the Back end network site list, click Tenant.


clip_image013

The virtualization gateway is now configured and ready to provide network virtualization with routing capabilities.

Step 3: Creating Tenant VM Networks

To create VM networks for the Contoso hosted resources on the Tenant logical network

1. Open the VMs and Services workspace.
2. In the VMs and Services pane, click VM Networks.
3. On the Home tab, in the Create group, click Create VM Network.
4. The Create VM Network Wizard opens.
5. On the Name page, enter Contoso VM Network, and then in the Logical network list, select Tenant. Click Next.
6. On the Isolation page, select Isolate using Hyper-V network virtualization, and then click Next.
7. On the VM Subnets page, click Add, enter Contoso VM Subnet as the name for the IP subnet and specify the subnet by using CIDR notation 10.0.0.0/24. Click Next.
8. On the Connectivity page, select Connect to another network through a VPN tunnel, and select Connect directly to an additional logical network using Network address translation (NAT). Verify that GatewayService is selected as the Gateway device, and then click Next.

 image  
9. On the VPN Connections page, next to Subnet, type 10.254.254.0/29. Under Specify VPN connections, click Add.
NOTE: This is the gateway subnet of the tenant compartment on the HNV gateway VM. The Contoso administrator should ensure that this subnet does not overlap with any other IP subnet in any of Contoso’s sites. The VSID interface in the tenant compartment is assigned the second IP address of the subnet.

10. Next to Name, type Contoso VPN Connection.
11. Next to Remote endpoint, type 131.107.0.100. This has to be the external (public) IP address of the remote S2S gateway for the Contoso enterprise.
12. Click Authentication. Select Authenticate using the following credentials, and then click Browse.
13. In the Select a Run As account dialog, click Create Run As Account.
14. In the Create Run As Account dialog, Next to Name, type Contoso User1 Account. Next to User name, type User1@contoso.com, and then type and confirm the password for User1. Clear the checkbox for validate domain credentials, and then click OK.
15. In the Select a Run As account dialog, verify that Contoso User1 Account is selected, and click OK.
16. Click Routes, and then click Add. Type 100.100.100.0/24 under Subnet, which is the subnet of the Contoso on-premises network, and then click Next.

image  
17. On the Network address translation (NAT) screen, specify NAT and, optionally, NAT rules. By enabling NAT, the VMs are automatically granted Internet connectivity through the virtualization gateway. If a VM within this VM network should serve a service to the public (in this scenario, the VM will host the Mobile Apps web service), you have to create NAT rules that point to the virtual machine’s IP address, which is completely virtualized using HNV. The IP pool is the pool we associated with the Public logical network earlier. VMM can pick an address from the pool for you, or you can type one in manually. Click Next once you are done.

SNAGHTML9aa2d96
18. On the Summary page, click Finish.
19. The Jobs dialog box appears to show the job status. Make sure that the job has a status of Completed, and then close the dialog box.
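The wizard steps above map to a handful of SCVMM cmdlets. A minimal sketch of the core part (VM network plus VM subnet; the S2S VPN and NAT settings were configured in the wizard and are omitted here), using the names from this walkthrough:

```powershell
# Create the Contoso VM network on the Tenant logical network,
# isolated with Hyper-V Network Virtualization.
$tenantNet = Get-SCLogicalNetwork -Name "Tenant"
$vmNetwork = New-SCVMNetwork -Name "Contoso VM Network" -LogicalNetwork $tenantNet `
    -IsolationType "WindowsNetworkVirtualization"

# Add the routable VM subnet (the Customer Address space)
$subnetVlan = New-SCSubnetVLan -Subnet "10.0.0.0/24"
New-SCVMSubnet -Name "Contoso VM Subnet" -VMNetwork $vmNetwork -SubnetVLan $subnetVlan
```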

Create IP Pools for the VM Networks

1. Open the VMs and Services workspace.
2. On the Home tab, in the Show group, click VM Networks.
3. Right-click Contoso VM Network, and then click Create IP Pool.
4. The Create IP Pool Wizard opens.
5. In Name, type Contoso IP Pool. Select Contoso VM Network and Contoso VM Subnet (10.0.0.0/24). Click Next.
6. On the IP address range page, change Starting IP address to 10.0.0.2, change Ending IP address to 10.0.0.10, and then click Next.
7. On the Gateway page, click Next.
8. On the DNS page, click Next.
9. On the WINS server page, click Next.
10. On the Summary page, click Finish.
11. The Jobs dialog box appears to show the job status. Make sure that the job has a status of Completed, and then close the dialog box.
12. Verify that Contoso VM Network appears in the VM Networks and IP Pools pane with its associated IP pool from 10.0.0.0/24.

Create a VM network and IP pool for tenant Woodgrove by repeating the previous process.
After the Contoso and Woodgrove VM networks have been successfully configured with associated IP pools, the view pane should look as shown below.

image

Step 4: Testing Hyper-V Network Virtualization setup:

There are 4 steps to testing Hyper-V Network Virtualization, cross-premises VPN and NAT connectivity.

1. Deploy tenant virtual machines to a VMM cloud to leverage Hyper-V Network Virtualization.
2. Establish site-to-site VPN connections between the simulated customer on-premises environments running on ONPREM-HOST and the Network Virtualization Gateway running on HV-EDGE.
3. Test connectivity from customer networks to cloud hosted resources over S2S VPN gateway.
4. Test gateway NAT functionality

1. Deploying Tenant Virtual Machines

In this step, you will deploy customer tenant VMs in the simulated datacenter environment. Contoso-VM01 and Woodgrove-VM01 will be deployed on Fabrikam_Cloud, where they will have a single connection to their respective tenant VM networks. Although the VMs will have the same names and IP addresses, they will be securely isolated from each other while maintaining access to their respective on-premises domain environments over the Internet. To deploy the Tenant virtual machines on Fabrikam Cloud, do the following:

1. In Virtual Machine Manager Console, open the VMs and Services workspace.
2. On the Home tab, in the Create group, click Create Virtual Machine. The Create Virtual Machine Wizard will launch.
3. On the Select Source screen, click Browse.
4. Double-click the VHD file WS2012R2BaseOS.vhdx to select it, and then click Next.
5. Type Contoso-VM01 for virtual machine name, and then click Next.
6. On the Configure hardware screen, provide settings for memory, and then select the Network Adapter 1 setting in the console tree.
7. Under Connectivity, select Connected to a VM network, and then click Browse.
8. Select Contoso, and then click OK.
9. Next to VM subnet, select Contoso VM Subnet.
10. Click Next.
11. On the Select Destination screen, select Deploy Virtual Machine to Private Cloud, and then click Next.
12. Review the options on the Add properties screen and adjust settings as desired, then click Next.
13. On the Summary page, click Create.
14. The Jobs dialog box appears. Make sure that the job has a status of Completed, and then close the dialog box.
15. Verify that Contoso-VM01 is now running on Fabrikam_Cloud.
16. Connect to the Contoso-VM01 virtual machine.
17. Complete the mini-setup process when starting the VM for the first time, and then sign in as the local Administrator.
18. From the Server Manager console Tools menu, click Windows PowerShell.
19. In the Windows PowerShell window, type ipconfig /all to display the Windows IP configuration. Note that the address 10.0.0.2 was assigned automatically by the SCVMM DHCP Server component. Note that the first IP address in the VM Subnet range, 10.0.0.1, was automatically assigned by VMM as the default gateway.
20. It is now possible to ping the VMM-assigned default gateway in order to validate connectivity. Type ping 10.0.0.1 and press ENTER to test the virtual gateway connection. You should receive four replies from 10.0.0.1.

Create Woodgrove-VM01 for tenant Woodgrove by repeating the previous process.
After the Contoso and Woodgrove VMs have been successfully deployed, the view pane should look as shown below.

image

2. Deploying Enterprise Gateways on customer premises

In this step, you will install and configure RRAS on the enterprise gateway virtual machines for both Contoso and Woodgrove. These VMs will be used to establish cross-premises VPN connections to make hosted cloud resources available to the on-premises customer corpnet environments.

NOTE: We assume that VMs have been deployed in the remote locations. Each VM must be configured with 2 virtual network adapters. One adapter is connected to the external virtual switch on ONPREM-HOST used to simulate an Internet connection, and the other adapter is connected to the external virtual switch on ONPREM-HOST used to simulate customer’s premises connection.

Configuration in this step consists of the following:

  • Install RRAS on ContosoEntGW and create a site-to-site VPN connection to MTGatewayVM running on HV-EDGE
  • Install RRAS on WoodgroveEntGW and create a site-to-site VPN connection to MTGatewayVM running on HV-EDGE
  • View and initialize the site-to-site VPN connections on MTGatewayVM
Install RRAS on ContosoEntGW and create a site-to-site VPN connection to MTGatewayVM

1. Connect to the ContosoEntGW virtual machine. On Server Manager Dashboard screen, under Configure this local server, click Add roles and features.
2. On the Select Server Roles page, select Remote Access and then click Next.
3. On the Features selection screen, click Next.
4. On the Remote Access screen, click Next.
5. On the Role Services selection screen, click to select the DirectAccess and VPN (RAS) and the Routing role services. Click Add Features when prompted, and then click Next.
6. Click Next twice to accept the default settings for Web Server Role and Role Services, and then click Install.
7. Verify that the installation was successful, and then click Close.

To establish a site-to-site VPN connection between ContosoEntGW and MTGatewayVM

1. On the ContosoEntGW, click Tools in Server Manager, and then click Routing and Remote Access.
2. In Routing and Remote Access, right-click ContosoEntGW (local) in the console tree, and then click Configure and Enable Routing and Remote Access.
3. The Routing and Remote Access Server Setup Wizard appears. Click Next.
4. On the Configuration page, select Secure connection between two private networks. Connect this network to a remote network such as a branch office, and then click Next.
5. On the Demand-Dial Connections page, verify that Yes is selected, and then click Next.
6. On the IP Address Assignment page, select Automatically. Click Next.
7. Click Finish.
8. The Demand-Dial Interface Wizard will start. Click Next.
9. On the Interface Name page, type MTGatewayVM. Click Next.
10. On the Connection Type page, select Connect using virtual private networking (VPN). Click Next.
11. On the VPN Type page, select IKEv2. Click Next.
12. On the Destination Address page, type 131.107.0.1 which is the IP of the external network interface of the MTGatewayVM, and then click Next.
13. On the Protocols and Security page, select Route IP packets on this interface. Click Next.
14. On the Static Routes for Remote Networks page, click Add. In Destination, type 10.0.0.0. In Network Mask, type 255.255.255.0. In Metric, type 1. Click OK, and then click Next.
15. On the Dial-Out Credentials page, click Next.
16. On the Completing the Demand-Dial Interface Wizard page, click Finish.
17. In the Routing and Remote Access console, expand ContosoEntGW (local), and then click Network Interfaces.
18. Right-click the MTGatewayVM demand dial interface listed in the details pane, and then click Properties.
19. Select the Security tab, and then under Authentication, select Use preshared key for authentication. Type your administrator password next to Key (this is the password used for the Tenant Run As Account used for Authentication).
20. Click OK to close the MTGatewayVM Properties window
21. Right-click the MTGatewayVM connection, and then click Connect.
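The demand-dial interface created in steps 8–21 can alternatively be scripted with the RemoteAccess PowerShell module, assuming RRAS has been configured for site-to-site VPN. A sketch using the values from this lab (the shared secret placeholder stands in for the tenant Run As account password):

```powershell
# Create the IKEv2 demand-dial (S2S) interface to the multitenant gateway.
# The shared secret must match the password of the tenant Run As account.
Add-VpnS2SInterface -Name "MTGatewayVM" -Destination "131.107.0.1" `
    -Protocol IKEv2 -AuthenticationMethod PSKOnly -SharedSecret "<RunAsAccountPassword>" `
    -IPv4Subnet @("10.0.0.0/24:1") -Persistent

# Dial the connection (equivalent to right-click > Connect in the console)
Connect-VpnS2SInterface -Name "MTGatewayVM"
```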

Install and configure RRAS on WoodgroveEntGW for customer Woodgrove by repeating the previous process.

 
3. Verify network connectivity for the tenant virtual machines:

The Contoso-VM01 and Woodgrove-VM01 virtual machines are both hosted on the TENANT-HOST server. Although they share the same IP address, they are securely isolated from one another in the datacenter using network virtualization. To verify that these virtual machines have network connectivity to remote resources in their respective customer on-premises environments over the Internet, through the multitenant S2S gateway running on HV-EDGE, perform the following steps:

1. Windows Server 2012 R2 includes a new network utility Windows PowerShell cmdlet named Test-NetConnection. Type Test-NetConnection 100.100.100.2 -TraceRoute -InformationLevel Detailed and press ENTER to verify connectivity to the internal interface on ContosoEntGW through the datacenter physical network, the virtualization gateway, and the cross-premises VPN connection over the Internet. The results of the Ping/ICMP test should indicate that the test succeeded to 100.100.100.2.

image

2. Windows Server 2012 R2 also includes support for a new Windows PowerShell cmdlet, Test-VMNetworkAdapter, which gives users a scriptable way to troubleshoot VM connectivity quickly. Test-VMNetworkAdapter is also known as “CA ping”. It runs on a Hyper-V host and works for both HNV and non-HNV (i.e., VLAN-based) networks. Datacenter administrators can use this cmdlet to verify connectivity for tenant VMs without having access to the actual VM. Type the following command to test connectivity from Contoso-VM01 through the gateway VM, over the Contoso S2S VPN tunnel, to the internal interface on ContosoEntGW.

clip_image021
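The exact command used in the lab appears in the screenshot above. For reference, a hedged sketch of what a CA ping of this kind might look like; the MAC address and sequence number below are illustrative placeholders, not values taken from the lab:

```powershell
# Run on the Hyper-V host. Sends a test frame from Contoso-VM01's adapter
# (Customer Address 10.0.0.2) toward the on-premises address 100.100.100.2.
Test-VMNetworkAdapter -VMName "Contoso-VM01" -Sender `
    -SenderIPAddress "10.0.0.2" -ReceiverIPAddress "100.100.100.2" `
    -NextHopMacAddress "00-00-00-00-00-01" -SequenceNumber 100
```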

3. When you type ipconfig on the HNV gateway VM, you should see several IPs from the front-end network assigned to the front-end adapter. Every time you create a VM network and configure connectivity with NAT, an IP address is assigned to that network and implemented on the gateway VM’s front-end network adapter.

image

4. We also enabled NAT connectivity on each tenant VM network and used a public DNS server for name resolution. We can verify that we have Internet connectivity by typing nslookup and looking up any public domain name from the tenant VMs.

Behind the scenes: What exactly is being applied as configuration by SCVMM

While everything in the previous setup was configured and deployed using Virtual Machine Manager, we have found that many of the concepts we are talking about are best understood by going through the PowerShell APIs to check the Hyper-V Network Virtualization settings. In this section we will examine how SCVMM configured the policy records and the HNV gateway behind the scenes, based on the settings provided by the Fabrikam administrator.

1. Examining HNV Policy records on Hyper-V Hosts:

The best way to understand the policy records is to go through the PowerShell APIs that allow setting them. There are four APIs we need to look at. Each API has New, Get, Set, and Remove commands, but for our purposes the Get command is the most interesting.

Get-NetVirtualizationLookupRecord: This cmdlet gets a record that maps a Customer Address to a Provider Address: it returns the lookup-record policy entry for an IP address that belongs to a VM Network. Computers can exchange network traffic with a virtual machine (VM) by using a Customer Address within the virtual network. Network Virtualization manages the Provider Addresses, which are the physical network addresses. Running Get-NetVirtualizationLookupRecord on Tenant-Host (or HV-EDGE) returns the following records:

This record configures the lookup record for Woodgrove-VM01 located on Tenant-Host 

image  
NOTE: The VirtualSubnetID value assigned to Woodgrove-VM01 by VMM is what differentiates it from the same Customer Address in use by Contoso-VM01.


This record configures the lookup record for Contoso-VM01 located on Tenant-Host

clip_image027

NOTE: Contoso-VM01 and Woodgrove-VM01 virtual machines have the same CustomerAddress value of 10.0.0.2. The common Customer Addresses are isolated from one another by means of their unique CustomerID and VirtualSubnetID values. There are two Provider Addresses, one for each tenant, automatically assigned by VMM in the 10.1.1.1-10 IP address range. These addresses were assigned by the DHCP extension running on Tenant-Host from the PA IP Pool. Note that each tenant network also has a virtualized instance of a gateway at the Customer Address of 10.0.0.1.
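To make the record structure concrete, here is a hand-written sketch of the kind of lookup records VMM creates for these two VMs. The Woodgrove VSID (16736361) is the one that appears later in this walkthrough; the Contoso VSID, the MAC addresses, and the exact PAs are hypothetical:

```powershell
# Woodgrove-VM01: CA 10.0.0.2 maps to PA 10.1.1.2 in virtual subnet 16736361.
New-NetVirtualizationLookupRecord -CustomerAddress "10.0.0.2" `
    -ProviderAddress "10.1.1.2" -VirtualSubnetID 16736361 `
    -MACAddress "00155D010201" -Rule "TranslationMethodEncap"

# Contoso-VM01: the SAME CA 10.0.0.2, isolated by a different VSID and PA.
New-NetVirtualizationLookupRecord -CustomerAddress "10.0.0.2" `
    -ProviderAddress "10.1.1.3" -VirtualSubnetID 7586437 `
    -MACAddress "00155D010301" -Rule "TranslationMethodEncap"
```

The Rule value TranslationMethodEncap selects NVGRE encapsulation, which is how HNV carries the VSID on the wire.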

This record is the gateway Customer Address (10.0.0.1) lookup record that corresponds to the same VSID assigned to Woodgrove-VM01. This is the gateway interface assigned to the Woodgrove-VM01 virtual machine.

clip_image029

NOTE: This is a new feature in Windows Server 2012 R2. It allows VMs on a virtualized network to ping the default HNV gateway, which is a useful diagnostic mechanism to verify basic connectivity on a virtualized network.

NOTE: The default gateway for a subnet is fixed at the .1 address for that subnet.

This record configures the lookup record for MTGatewayVM located on the Gateway host

clip_image031

NOTE: SCVMM assigns 1 Provider address per routing domain (or VMNetwork), and 1 Provider address per gateway.

Get-NetVirtualizationCustomerRoute: This cmdlet returns the list of customer routes on each host. HNV uses customer routes to manage network traffic on a VM Network. Running Get-NetVirtualizationCustomerRoute on Tenant-Host (or HV-EDGE) returns the following records (here we only cover Woodgrove’s customer routes; in your setup you should get similar records for the other VM Networks configured in your environment):

This record configures the customer route for the Woodgrove subnet 10.0.0.0/24. This is the IP address range for the virtual subnet 16736361.

image  

NOTE: This is where a VSID is assigned to a Routing Domain (called a VM Network in SCVMM). Unlike a VSID, which is explicitly carried in the packet, the routing domain is a logical concept that is enforced in the HNV module.

NOTE: This is where a VSID is assigned an IP address subnet. A virtual subnet supports all validly sized subnets, so it is not required to be a /24; it can range from a /24 to a /30.

NOTE: As described previously HNV does all routing inside a routing domain. This means the next hop is “onlink” and HNV will handle all the routing.
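Expressed by hand, the route above would look roughly like this. The RoutingDomainID GUID is hypothetical; the VSID and prefix are the ones shown in the record:

```powershell
# On-link route for the Woodgrove virtual subnet: a NextHop of 0.0.0.0
# means the destination is "onlink" and HNV itself handles the routing
# within the routing domain.
New-NetVirtualizationCustomerRoute `
    -RoutingDomainID "{11111111-2222-3333-4444-555555555555}" `
    -VirtualSubnetID 16736361 -DestinationPrefix "10.0.0.0/24" `
    -NextHop "0.0.0.0" -Metric 255
```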

This record configures the customer route for the Woodgrove gateway’s 10.254.254.0/29 subnet. This is the IP address range for the virtual subnet 6705397. It is also the customer route for the HNV Gateway.

clip_image035

NOTE: The gateway must be on its own VSID. This is because policy (as seen in the next step) is configured to send all traffic destined outside the virtual network through the gateway as the next hop. This precludes other VMs from being on the same VSID as the gateway.

NOTE: The gateway and any VSID that has to go through that particular gateway must be in the same routing domain.

Examining Routing compartments on the HNV Gateway:

Now let’s examine how packets from a tenant virtual network enter the corresponding routing compartment within the gateway. The gateway runs as a VM, and the host’s physical network interface card is connected to the cloud service provider network. NVGRE packets from all tenants therefore arrive at the host’s physical interface and are delivered, by the virtual switch connected to that interface, through the corresponding virtual interface into the specific tenant compartment of the gateway VM. Running the following two cmdlets on the gateway host helps show how SCVMM implemented this:

Get-VmNetworkAdapterIsolation -VMName "MTGatewayVM"

This setting enables traffic isolation on the network adapter of the gateway VM

clip_image037

Get-VMNetworkAdapterRoutingDomainMapping

The settings below ensure that all HNV packets belonging to a specific tenant routing domain are delivered to the corresponding VSID interface on the gateway VM.

clip_image039
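VMM applies these settings automatically, but for illustration the equivalent manual configuration would look roughly as follows. The GUID and names are hypothetical; 6705397 is the Woodgrove gateway VSID shown earlier:

```powershell
# Turn on the multi-tenant stack and VSID-based isolation for the gateway
# VM's adapter on the back-end (HNV) network.
Set-VMNetworkAdapterIsolation -VMName "MTGatewayVM" `
    -IsolationMode NativeVirtualSubnet -MultiTenantStack On

# Map the Woodgrove routing domain to its VSID interface inside the VM;
# this is what steers Woodgrove traffic into the Woodgrove compartment.
Add-VMNetworkAdapterRoutingDomainMapping -VMName "MTGatewayVM" `
    -RoutingDomainID "{11111111-2222-3333-4444-555555555555}" `
    -RoutingDomainName "Woodgrove" `
    -IsolationID 6705397 -IsolationName "WoodgroveGW"
```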

After the above records have been implemented successfully on the gateway host, a compartment is created for each tenant. Now let’s take a look from inside the gateway VM. From an elevated PowerShell prompt on the gateway, run the following cmdlet:

Get-NetCompartment

You’ll find each tenant has its own compartment.

clip_image041

Once the compartments are created, the VSID interface in each tenant compartment is assigned an IP address. SCVMM assigns this address to the VSID interface based on the “gateway subnet” value specified by the tenant administrator. This can be examined with the following command:

ipconfig /allcompartments 

clip_image043
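You can also exercise a specific compartment directly: Windows ping accepts a routing-compartment identifier with the -c switch. The compartment ID below is hypothetical; take the real one from the Get-NetCompartment output:

```powershell
# From inside the gateway VM: ping Woodgrove-VM01's CA (10.0.0.2) through
# the Woodgrove routing compartment (here assumed to be compartment 2).
ping -c 2 10.0.0.2
```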

Before creating S2S interfaces, Routing needs to be enabled in the compartment. That can be verified by running the following cmdlet:

Get-RemoteAccessRoutingDomain
clip_image045

Type the following command and press ENTER to display the VPN S2S interfaces configured by VMM as part of the tenant VM Network creation steps.

Get-VpnS2SInterface | fl

image

NOTE: There are two VPN interfaces created, one for the Contoso Routing Domain, and one for the Woodgrove Routing Domain. The packets sent and received over these VPN interfaces are securely isolated within their respective network routing compartments.

That’s it for Now, What’s Next!

In this blog post, we covered a step-by-step guide on how to implement our SDN solution using Hyper-V Network Virtualization (HNV), Windows Server 2012 R2 and System Center 2012 R2. In the next and final part of this blog series, we will see how the technologies introduced in Windows Server 2012 R2, such as Hyper-V Network Virtualization (HNV), Hyper-V Replica (HVR) and the multi-tenant TCP/IP stack, enable cloud service providers like Fabrikam to provide disaster recovery as a service at scale to enterprises and small and medium businesses.

Thank you for sticking with the entire walkthrough. I know it was lengthy, but details count.

 

Till next time, Happy “Networking”! clip_image010

  • Was "ping -p" syntax changed from Preview to RTM?

  • Thanks Stanislav for the comment!

    everything should be the same, there was no change. Are you seeing differences or some issues ?

  • I've spotted some difference. Previously in this post http://blogs.technet.com/b/networking/archive/2013/07/31/new-networking-diagnostics-with-powershell-in-windows-server-r2.aspx I see that there are two IP addresses used while in RTM I can use only one IP address: http://cloudadministrator.wordpress.com/2013/11/25/quick-tip-pinging-provider-address-in-hyper-v-network-virtualization/

  • VMHost=HV-Edge.contoso.com;GatewayVM=MTGW01.contoso.com;BackEndSwitch=HNVSwitch; You don't explain if the Gateway VM needs to be part of the domain before you can execute the connection string?

  • @Alex - Thanks for the feedback. It is actually unrelated, and depends specifically on your fabric design and security requirements. Gateway VMs could be member of Fabric domain, or you can have dedicated domain on untrusted zone (DMZ) where you deploy your edge services. When you add the Network Service on SCVMM, please make sure you provide a RAA that has Admin privileges on the VMs.

  • @Nader - Thanks! I know it is not easy to write such a detailed article! Anyway, I concur that it is "unrelated". Your step by step guide explains how you can implement a "new" cloud. It doesn't really focus on "already" implemented environment. For example, I have a few VMs running from multiple computers on a Hyper-V Host with IP Addresses, etc. I really don't want to create Tenant IP Pool. -Do you still need to create IP Pools (Tenant) for existing VMs? -How Lookup Records are created for VMs which have not been implemented via SCVMM? -How necessary GW (GatewayWildcard) records are published on Hyper-V Host? Most importantly is to know "when" these settings are pushed to Hyper-V Host (e.g. creating customer routes, lookup tables, PA addresses, etc). Thanks!

  • How do you enable HNV for existing virtual machines? I don't see any options on the property page of VM via SCVMM. Thanks!

  • I got lost somewhere in this HNV - how does the tenant publish their services from their isolated networks? I understand that each VMNetwork can be mapped only to 1 external IP(many-to-one NAT). If a tenant has multiple different services to publish to internet(citrix farm, web applications, other services), how does he survive with just 1 IP? I could create multiple VMnetworks for a single tenant of course, but tenant would have to connect those networks inside his bubble with his own routing solution. Then again, I can make tenant networks routable to PA space but that would introduce issues with overlapping CA and the design is a security issue itself in general. Please advise.

  • Nader, I can't seem to find Part 3 - was it ever posted?