Cloud Insights from Brad Anderson, Corporate Vice President, Enterprise Client & Mobility
In the previous four technical posts I’ve examined Virtualization & Templates, VMRoles & PaaS, Azure Active Directory, and Windows Azure Pack – now I want to look at the inner workings of the connections between clouds. A Hybrid Cloud spans your cloud resources (across public, private, hosted), and linking these otherwise-siloed clouds is the fundamental action of a Hybrid environment. This connection is enormously complex, and Microsoft has tackled this challenge for you by developing the networking that connects your clouds.
In this post I’ll look at Hybrid Networking from two angles: The hoster and the tenant. I’ll also explain the way that these clouds are connected, in particular what we deliver with the Microsoft Cloud OS vision provided by Windows Server 2012 R2, System Center 2012 R2, and Windows Azure Pack (full list here).
With Windows Server 2012 and System Center 2012 SP1 we built the foundation of Hybrid Networking with Hyper-V Network Virtualization (HNV). HNV isolates tenants without the need for VLANs and enables tenants to use the same IP subnet ranges. This saves hosters from the burden of dealing with VLAN limitations (the 12-bit VLAN ID allows only 4,096 VLANs), and it also delivers a modern self-service experience to tenants, who can bring their own network (IP range) to the service provider.
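To make that isolation model concrete, here is a hypothetical sketch using the network-virtualization cmdlets that ship in Windows Server 2012 R2. In practice VMM programs these policy records for you; all addresses, MAC values, and IDs below are made up. Two tenants both use 10.0.0.0/24, and HNV keeps their traffic apart by Virtual Subnet ID rather than by VLAN:

```powershell
# Tenant A's VM: customer address 10.0.0.5 lives on provider address 192.168.1.10,
# isolated under Virtual Subnet ID 5001 (the ID carried in the NVGRE header).
New-NetVirtualizationLookupRecord -CustomerAddress "10.0.0.5" `
    -ProviderAddress "192.168.1.10" -VirtualSubnetID "5001" `
    -MACAddress "101010101105" -Rule "TranslationMethodEncap"

# Tenant B's VM reuses the very same customer address 10.0.0.5 on another host,
# but under Virtual Subnet ID 6001: no VLANs and no IP conflict.
New-NetVirtualizationLookupRecord -CustomerAddress "10.0.0.5" `
    -ProviderAddress "192.168.1.11" -VirtualSubnetID "6001" `
    -MACAddress "101010101106" -Rule "TranslationMethodEncap"
```

Because isolation is driven by these per-tenant policy records instead of switch-port VLAN tags, the 4,096-VLAN ceiling simply disappears.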
All of this functionality is maintained in the 2012 R2 releases, and with Windows Server 2012 R2 we deliver a multi-tenant NVGRE gateway that provides site-to-site VPN connectivity, NAT, and direct routing. The gateway is fully managed by Virtual Machine Manager 2012 R2, and the self-service experience is delivered by Windows Azure Pack.
I explored the topic of Software-defined Networking in depth back in October – you can check it out here.
With this intro in mind, let’s get technical. I’ll start with the hoster.
Our hoster (for the purposes of this post I'll give them a suitably original name: Contoso) is running Windows Server 2012 R2 and System Center 2012 R2, and has also installed Windows Azure Pack.
To allow tenants to bring their own network (BYON), Contoso is also ready to deploy an HA NVGRE Gateway Service Template for VMM. This service template removes the burden of manually setting up a guest cluster and then configuring RRAS. The template can be downloaded via the Web Platform Installer by using this custom feed link. After filling out the required information, Contoso starts deploying the service template.
Note: You can find more details on this process and how to design and build a virtualized network infrastructure here. This virtualized network will allow service providers to offer their customers the ability to extend their networks to the service provider. There's additional info about this in the "More Info" section below.
As soon as the gateway deployment is finished, Contoso only needs to add the deployed HA NVGRE Gateway as a Network Service in VMM. Now the service is ready for their customers to use. To add the network service, simply use the wizard (available within VMM) to guide you through the process. The only tricky part is the connection string.
Regarding that connection string, here are two examples for reference:
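The screenshots from the original post aren't reproduced here, but the connection string for the gateway network service is a set of key=value pairs identifying the gateway host (or cluster), the gateway VM, and the back-end virtual switch. Hypothetical examples, with all host, VM, and switch names as placeholders, could look like:

```
VMHost=gw-cluster.contoso.com;GatewayVM=GW-VM01.contoso.com;BackendSwitch=BackendSwitch
VMHost=gw-host02.contoso.com;GatewayVM=GW-VM02.contoso.com;BackendSwitch=NVGRE-Backend
```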
To be clear on the needs of your organization, I recommend reviewing each of the “Software Defined Networking” posts noted in the “More Info” section below – especially Part 3.
For the final step to complete the hoster configuration, Contoso configures the connectivity of the Network Service that represents the NVGRE Gateway. To do this, they need to select the front-end network that represents the external network with the publicly routable IPv4 address. In addition, they also need to configure the back-end network that maps to the "Network Virtualization" network.
Note: You can have multiple Network Services pointing to multiple NVGRE Gateways registered in VMM. This ensures the solution will scale, because VMM distributes the connections across the registered network services (NVGRE Gateways). The HA NVGRE Gateway itself is active/passive and supports up to 200 NAT connections or 50 VPN connections. When planning, keep in mind that each tenant that enables NAT needs one publicly routable IPv4 address. For network throughput, I recommend network cards that support NVGRE offload.
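As a rough planning sketch based on the capacity figures above (the tenant counts here are made-up examples), you can estimate how many gateways a deployment needs:

```powershell
# Rough capacity planning using the limits above: each active/passive HA NVGRE
# gateway supports up to 200 NAT connections or 50 site-to-site VPN connections.
# Tenant counts are hypothetical.
$natTenants = 450
$vpnTenants = 120

$gatewaysForNat = [math]::Ceiling($natTenants / 200)
$gatewaysForVpn = [math]::Ceiling($vpnTenants / 50)

# Each NAT-enabled tenant also consumes one publicly routable IPv4 address.
"$gatewaysForNat gateway(s) for NAT, $gatewaysForVpn gateway(s) for VPN, $natTenants public IPv4 addresses"
# → 3 gateway(s) for NAT, 3 gateway(s) for VPN, 450 public IPv4 addresses
```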
In this scenario, Contoso has a new tenant called "Fabrikam" that has signed up for their IaaS hosting as part of their goal to build a Hybrid Cloud and run multiple services in the Contoso Cloud. This IaaS hosting enables Fabrikam to pay on a usage-based model and scale as needed without investing in their own datacenter. Looking ahead, Fabrikam didn't want to be locked into a single cloud decision; they wanted the flexibility to move assets to Windows Azure or any other cloud partner when required.
The Contoso Cloud portal experience looks very familiar to Fabrikam’s admins because they already use Azure as a development platform. This helps the admins navigate the portal and use Azure PowerShell for consistency.
Fabrikam begins by creating a virtual network. Early on, they notice they can choose their own IP Subnet when creating the virtual network:
Fabrikam now needs to configure a gateway for their virtual network. This is done by simply checking a box (see image below). The Contoso Cloud Portal calls the WAP Tenant API, the Tenant API in turn calls Service Provider Foundation (SPF), and SPF enables the gateway in VMM for the Fabrikam VM network.
Fabrikam’s last step is to configure site-to-site connectivity between the Contoso Cloud and their on-prem network. Fabrikam has done this before with Windows Azure, and the similarities here with WAP are unmistakable.
After downloading the VPN configuration script (using the link provided by the hoster), the network engineer at Fabrikam can quickly establish the site-to-site tunnel. Fabrikam also uses the Network Address Translation (NAT) feature for outbound internet access and for publishing web-based services.
Note: The Multi-Tenancy Gateway in Windows Server 2012 R2 only supports IKEv2 for VPN connections.
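The downloaded script essentially drives the Windows Server RRAS cmdlets. A minimal sketch of what it configures on Fabrikam's edge server might look like the following; the interface name, addresses, subnet, and pre-shared key are all placeholders, and the RemoteAccess role is assumed to already be installed:

```powershell
# Enable the site-to-site VPN role service on the edge server.
Install-RemoteAccess -VpnType VpnS2S

# Define the tunnel to the hoster's public gateway address.
# IKEv2 is the only protocol the Windows Server 2012 R2 gateway accepts.
$s2s = @{
    Name                 = "ContosoCloud"         # placeholder interface name
    Destination          = "131.107.0.10"         # placeholder public gateway IP
    Protocol             = "IKEv2"
    AuthenticationMethod = "PSKOnly"
    SharedSecret         = "ExamplePreSharedKey"  # placeholder; use the key issued by the hoster
    IPv4Subnet           = "10.10.0.0/24:100"     # cloud-side subnet and route metric
}
Add-VpnS2SInterface @s2s

# Bring the tunnel up.
Connect-VpnS2SInterface -Name "ContosoCloud"
```

The actual script from the hoster will carry the real destination address, subnets, and credentials, so treat this only as an illustration of the moving parts.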