Building Cloud Infrastructure with Windows Server 2012 and System Center 2012 SP1


Operating systems are platforms delivering experiences, features, and APIs that developers can build upon. Today, many developers take already-shipping versions of Windows and deliver cloud computing solutions. Windows Server 2012 is a cloud-optimized OS, which means that developers can deliver much better cloud computing solutions with much less effort. System Center 2012 already delivers great cloud computing solutions using Windows Server 2008/R2. In this blog, Anders Vinberg, a Technical Fellow in our Management Division, describes how the Virtual Machine Manager component in System Center 2012 SP1, now available as a community technology preview, builds on the cloud-optimizing features of Windows Server 2012 to take that solution to the next level.

--Cheers! Jeffrey

With the official naming of Windows Server “8” as Windows Server 2012 and the launch of System Center 2012 at MMS a few weeks back, Microsoft has now delivered a solution for customers building their private clouds and for hosters building their own Infrastructure-as-a-Service public cloud offerings. It is instructive to recap what moving to the cloud model means and the core tenets of a cloud, as laid out in Brad Anderson’s keynote at MMS, and then look at how this is done with Windows Server 2012 and System Center 2012 SP1.

The Cloud Model
First off, it is important to note that cloud computing does not necessarily mean that the workload is running outside a customer’s premises. The workloads could be deployed on infrastructure that is on a customer’s premises, or on their partners’ premises but completely controlled and managed by the customer. That is a “private” cloud. Workloads could also be deployed and run on a hoster’s premises on shared infrastructure that is used by other tenants. That is a “public” cloud. In both cases, cloud computing is a way of consuming capacity with the attributes of resource pooling, self-service, elasticity and usage-based metering.

 

 

The Cloud Personas
As the cloud model decouples the infrastructure from the services it supports, it also decouples two distinct processes: provisioning and consumption. And there are two corresponding personas:

  • Service provider (the datacenter admin)
  • Service consumer (an application owner)

These two personas look for quite distinct attributes, each in their domain:

 



The separation of concerns between the provider and consumer offers great simplicity and agility. It is a foundation for the trend toward democratization of computing. We often hear that the consumer should not have to be aware of the details of the physical infrastructure, but we can make a stronger statement: the consumer is not allowed to be aware of the physical infrastructure, because that would constrain the daily work of the provider. The provider may need to replace an old machine with a new one that is more efficient, and should not have to involve or even inform the consumer, as long as the abstractions and service level agreements are satisfied. This decoupled model does not fit with all existing IT processes or with all existing apps; in a coming blog we will discuss how Windows Server and System Center accommodate a mix of work styles.

Cloud Attributes Realized
Let’s look at each of the four cloud attributes and see what Windows Server 2012 and System Center 2012 provide customers.

  • Pooled resources: This means that we deal with resources at an aggregate level rather than at the level of individual servers. The cloud exposes a pool of capacity for use by services that require the capacity, and this abstraction decouples the virtualized workloads from the physical infrastructure, allowing dynamic workload placement and independent infrastructure management.
       While modern large-scale clouds often use strictly homogeneous hardware and require that software adapts, this is often not practical in enterprise computing where existing software may have specific hardware requirements; our cloud model supports heterogeneous resource pools, where the system automatically matches software requirements to hardware characteristics.
       Having pools of resources implies that multiple tenants (customers) will have their workloads on this environment and the infrastructure must provide the necessary isolation between fenced-off resource pools. Such multi-tenancy is not just for public clouds: even in a private cloud, the self-service model that gives consumers flexibility to deploy services with little oversight requires robust isolation between pools to prevent accidental impact on a neighbor.
    • Windows Server 2012 enables resources to be pooled via a variety of capabilities such as the Hyper-V extensible switch, Network Virtualization, Quality of Service (QoS) and network isolation policies. In addition, with enhancements in live and storage migration, the Windows Server platform enables resources to be moved easily across the datacenter, to optimize the use of datacenter resources.
    • System Center 2012, through the Virtual Machine Manager component, can aggregate compute, network, and storage resources and expose them as a construct called a “Cloud”. It supports managing these Clouds at scale and dynamically placing workloads in them, with role-based access control mechanisms for multi-tenant isolation and delegation of clouds to consumers. In SP1, Virtual Machine Manager uses the platform capabilities of network virtualization and live and storage migration for more flexible pool management and to load-balance the environment so that customers’ SLAs are met proactively.
  • Self-Service: In the cloud model, service consumers can use a self-service experience, typically a web-based portal, to access the capacity they have been allocated, self-provision workloads from standing up a single VM to deploying a complex service, and manage the life cycle of those workloads.
    • Windows Server 2012 goes a long way toward enabling full datacenter automation. Self-service implies that all datacenter operations must be fully automatable; otherwise, manual labor is required every time a workload is placed on a cloud. Windows Server 2012 is fully automatable via PowerShell and WMI, exposing the necessary interfaces to enable this scenario.
    • System Center 2012 builds on the automation capabilities in Windows Server 2012 and provides portals and management capabilities to enable self-service. The Service Manager component provides a service catalog that drives a self-service portal for IT approval workflows such as allocating capacity. The App Controller component provides a self-service experience for administering virtual machines and services, covering both private cloud and the Windows Azure public cloud. The Operations Manager component provides the operational intelligence for the environment, and the Orchestrator component provides run-book automation. Lastly, the Data Protection Manager component of System Center implements business continuity policies.
  • Elastic: Cloud Elasticity means that the infrastructure can support the changing needs of the organization, deploying new services as needed, allocating more resources to services that experience heavy load or de-allocating resources to save power when the load is light. With cross-cloud management, workloads can also move between private and public clouds, providing extra capacity, geo-scale reach, or other characteristics as needed.
    • From the Windows Server platform perspective, elasticity is enabled by allowing multiple services running on different infrastructures to be interconnected via IPSec VPNs. Windows Server 2012 has new support for IKEv2 VPNs in the box, allowing it to easily interconnect private and public clouds.
         In addition, elasticity also means that it should be possible to easily move any workload across the cloud to public cloud providers. In current technologies, this is very hard to achieve because workloads tend to have a lot of networking assumptions embedded into them, such as fixed IP addresses and subnets. With Windows Server 2012 Network Virtualization, it is now possible to move a workload around while keeping its own IP addresses and decoupling it from the provider’s IP space.
    • System Center 2012 SP1 uses the platform’s network virtualization capability in its network constructs. When a workload “network” is defined, System Center allows cloud consumers to deploy such networks on any cloud or on any physical network infrastructure that is made available to them.
         VMM not only allows elastic allocation and release of resources to services within a cloud, but also allows adding or removing capacity to the cloud itself, giving the appearance of unlimited capacity of the cloud as viewed by the service consumer. 
  • Usage Based: In the cloud model, customers are billed, or at least informed of their cloud resource usage, based on their actual resource consumption.
    • Windows Server 2012 provides capabilities for detailed and granular metering information for core metrics such as CPU, memory, storage and network. In Windows Server 2012, these metrics follow the VM as it migrates in the environment.
    • System Center 2012 aggregates these consumption metrics and allows the cloud operator to show back or bill back based on their policies.
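To make the usage-based model concrete, here is a minimal sketch using the Hyper-V PowerShell module in Windows Server 2012. This is illustrative only: the set of VMs comes from whatever host you run it on, and output columns may vary between builds.

```powershell
# Sketch: per-VM resource metering with the Windows Server 2012 Hyper-V module.
# Run on a Hyper-V host; Get-VM enumerates the VMs on that host.

# Turn on metering for all VMs on this host.
Get-VM | Enable-VMResourceMetering

# Later, collect the accumulated CPU, memory, disk, and network usage
# for showback or chargeback reporting.
Get-VM | Measure-VM | Format-Table

# Reset the counters once usage has been recorded for the billing period.
Get-VM | Reset-VMResourceMetering
```

System Center aggregates this same data across hosts; because the metrics follow the VM as it migrates, the report stays accurate even after a live migration.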

A detailed walkthrough of the various features and capabilities that make Windows Server 2012 a cloud-optimized OS can be found in the white paper Building an Infrastructure as a Service (IaaS) Cloud Using Windows Server 8.

 
Scenarios
As we can see from the above, there are many aspects to a cloud. In this blog we will focus on the Service provider persona, and specifically on how providers can stand up their private cloud infrastructure using SMB 3.0 as storage for VMs and using Hyper-V Network Virtualization with Windows Server 2012 Beta and the community technology preview (CTP) of System Center 2012 SP1 Virtual Machine Manager (VMM). In future posts we will delve deeper into the other aspects of the cloud.

Standing up Cloud Infrastructure with System Center 2012 SP1
Let’s start by looking at how Hyper-V Network Virtualization is provisioned and managed from VMM. In System Center 2012, VMM introduced Logical Networks, which abstract the various definitions of networks in enterprise datacenters, allowing datacenter administrators to use the vernacular of application owners, who express their connectivity using terms such as “I want my VM to connect to the CORP network”. A logical network can be defined differently for each datacenter site, and automation in VMM ensures that the appropriate configuration is applied when the VM is deployed. With SP1, we introduce another abstraction over this called “VM networks”. Logical networks now pertain to the fabric networks, and VMs and Services now connect only to VM networks. A VM network can be realized by a VLAN, a direct logical network, or, with Windows Server 2012, Hyper-V Network Virtualization.
In the System Center 2012 SP1 CTP, VMM only supports creating VM networks with Hyper-V Network Virtualization using Generic Routing Encapsulation (GRE), which is the preferred mechanism in the long term. In the final release of System Center 2012 SP1, we plan to support creating VM networks using IP Rewrite, which is easier to deploy in existing environments and doesn’t require a change of network infrastructure, but does require a provider address (PA) for each customer address (CA) you allocate. I strongly urge you to read the great blog on Hyper-V Virtual Networking to get an understanding of how this technology works.
The PAs are allocated from the Logical network space, so you should create a Logical network as you did previously and allocate an IP address pool from which VMM can pull addresses for the PA space. Next, you need to create a VM network, which is the network that will be used by the actual services being deployed. VM networks can be created with just a few clicks from the new node in the VMs and Services view in the VMM console. A detailed step-by-step guide for this can be found here.
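For readers who prefer scripting to the console, the same flow can be sketched with VMM PowerShell. This is an illustrative sketch only: the names (Fabric, Tailspin, the subnets and address ranges) are placeholders, and exact cmdlet parameters may differ in the SP1 CTP.

```powershell
# Sketch: provisioning network virtualization via VMM PowerShell.
# All names and address ranges below are placeholders.

# 1. A logical network and an IP address pool supply the provider address (PA) space.
$logicalNet = New-SCLogicalNetwork -Name "Fabric"
$hostGroup  = Get-SCVMHostGroup -Name "All Hosts"
$subnetVlan = New-SCSubnetVLan -Subnet "172.16.0.0/24" -VLanID 0
$netDef     = New-SCLogicalNetworkDefinition -Name "Fabric_Site1" `
                  -LogicalNetwork $logicalNet -VMHostGroup $hostGroup `
                  -SubnetVLan $subnetVlan
New-SCStaticIPAddressPool -Name "PA Pool" -LogicalNetworkDefinition $netDef `
    -Subnet "172.16.0.0/24" `
    -IPAddressRangeStart "172.16.0.10" -IPAddressRangeEnd "172.16.0.250"

# 2. A VM network carries the customer address (CA) space, isolated by
#    Hyper-V Network Virtualization; tenants can reuse overlapping subnets.
$vmNet = New-SCVMNetwork -Name "Tailspin" -LogicalNetwork $logicalNet `
             -IsolationType "WindowsNetworkVirtualization"
New-SCVMSubnet -Name "Tailspin_Subnet" -VMNetwork $vmNet `
    -SubnetVLan (New-SCSubnetVLan -Subnet "10.0.0.0/24")
```

A second tenant (say, Wingtip) could define its own VM network with the same 10.0.0.0/24 subnet; network virtualization keeps the two fully isolated.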

 

 

In the example above you can see that both the Tailspin network and the Wingtip network have overlapping IP ranges. They are realized and automatically provisioned using Hyper-V Network Virtualization, providing full isolation without any special hardware or additional software. When creating a VM, it can now be connected to a VM network, giving it connectivity to other VMs on the same VM network while keeping it isolated from VM networks that belong to other customers, even though they use the same subnet.
For service providers who need to provide isolated environments to their service consumers (tenants), this capability is invaluable and provides the flexibility to enable the tenants to bring their own IP addresses to the public cloud environment. In the CTP, if you want the VM on a VM network to communicate with entities not on the VM network you will need to set up a gateway between these networks. This can be done using a Windows Server instance with the appropriate routing rules and you can expect a future guide to walk you through the process of how to set it up. In addition, System Center will allow this to be done seamlessly as we move forward with development.
Storage is another vital component of a cloud and virtualization project. With Windows Server 2012 we now have the ability to use SMB 3.0 file shares for hosting Hyper-V VMs in both clustered and standalone environments. This helps drive down the cost of the cloud while adding flexibility and making management easier. (You can read more about storage for the cloud here.) System Center 2012 SP1 makes it very easy to use. The screenshots below depict how you can add a file share as storage for a cluster and for a standalone host; VMM configures the Access Control Lists appropriately for this configuration.
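As a rough sketch of the same operation in VMM PowerShell (the host, cluster, and share names below echo this post’s examples but are placeholders; exact parameters may differ in the CTP):

```powershell
# Sketch: registering an SMB 3.0 file share as VM storage in VMM SP1.
# Host, cluster, and share names are placeholders.

# Assign the share to a standalone host...
$vmHost = Get-SCVMHost -ComputerName "HV104"
$share  = Get-SCStorageFileShare -Name "VMShare1"
Register-SCStorageFileShare -StorageFileShare $share -VMHost $vmHost

# ...or to every node of a Hyper-V cluster in one step; VMM configures the
# share's Access Control Lists for the host or cluster computer accounts.
$cluster = Get-SCVMHostCluster -Name "HVClusterA"
Register-SCStorageFileShare -StorageFileShare $share -VMHostCluster $cluster
```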

 

 Standalone Host

 

Hyper-V Cluster


Once a VM is deployed onto a host and a particular storage subsystem, the service provider needs the flexibility to move the workload to a different host or to different storage, ensuring that VMs stay up even when the host needs to be serviced or the storage environment needs maintenance. With Windows Server 2012 and VMM we now offer multiple options for live migrating the VM and its associated storage. You can:

  1. Live migrate the VM within a cluster (which normally has shared block or file storage)
  2. Live migrate the VM in and out of a cluster
  3. Live migrate the storage of the VM from one storage sub-system to the other
  4. Live migrate the VM from one host to the other (with no shared storage)
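All four options surface through the same VMM cmdlet. A minimal sketch (the VM, host, and path names are placeholders taken from this post’s examples; parameters may differ in the CTP):

```powershell
# Sketch: live migrating a VM with VMM PowerShell.
$vm       = Get-SCVirtualMachine -Name "Tailspin_VM2"
$destHost = Get-SCVMHost -ComputerName "HV105"

# Live migrate the VM to the destination host. When source and destination
# share no storage, VMM moves the VM's files as well -- the move the console
# labels "Live (VSM)".
Move-SCVirtualMachine -VM $vm -VMHost $destHost `
    -Path "C:\VMStorage" -RunAsynchronously
```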

Just imagine the flexibility that this provides you as a datacenter administrator. The screenshot below depicts these various options from within VMM.

 


As you can see on the left side of the above screenshot, a VM called Tailspin_VM2 runs on a standalone host HV104. The dialog on the right shows that it can be migrated from this standalone host into nodes of the HVClusterA cluster (hv103n3, hv101n1 and hv102n2) as well as to the standalone HV105. System Center automatically detects there is no shared storage between HV104 and HVClusterA and tags these migrations as “Live (VSM)”, indicating that storage would be migrated too, and not just the virtual machines.

Note that System Center also gives you the option to migrate the VM’s storage within the host with no downtime for the VM. This is useful if, for example, you are running out of local storage on a particular drive and want to move the VM’s storage onto a different drive with more capacity on the host.
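At the platform level, the same within-host storage move can be sketched with the Windows Server 2012 Hyper-V module (the VM name and destination path are placeholders):

```powershell
# Sketch: move a running VM's storage to another drive on the same host,
# with no downtime for the VM.
Move-VMStorage -VMName "Tailspin_VM2" -DestinationStoragePath "E:\VMs\Tailspin_VM2"
```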

The perceptive reader will have noticed that we show only “Live” for HV105. Why is that? No, it’s not a bug. To understand it, let’s take a look at the storage properties for HV104 (the host the VM is currently on) and HV105. As you will notice, each of these hosts sees the same SMB 3.0 share, and hence VMM can migrate the VM without having to move the storage.

 

 

Summary
In this blog we discussed the cloud model and the two cloud personas (“Service provider” and “Service consumer”). We also described how Windows Server 2012 and System Center 2012 SP1 deliver this model, highlighting how Windows Server 2012 and the Virtual Machine Manager component in System Center 2012 SP1 enable service providers to utilize SMB 3.0 storage for VMs and to create isolated networks using Hyper-V Network Virtualization. Over the next few months we will provide additional details of how VMM can facilitate resource pooling and tenant administration, and how it can utilize the plethora of capabilities in Windows Server 2012.



 

Comments
  • Sure hope that you can build proper storage resource pooling. I have a couple of servers, all with local disks. If I could build a single, resilient (redundant) storage pool out of that just with software that'd be great.

    Also, it'd be awesome if SCVMM could build a poor-man's cluster that would automatically replicate each VM to another server and start the VM there when the original server goes down. The technology is all there, it just has to be orchestrated properly.

  • The key word is Orchestrator... :)

    With SCOM and Orchestrator you can easily pull off what you are asking.

    As for storage ... I'm holding for the same.  Something like gluster or hadoop ... though I don't see it in beta yet.

  • Hi.

    Why are all the images on the Microsoft blogs almost always too small to be useful? When clicking on the thumbnail I'm expecting a bigger version, not the thumbnail again.

  • System Center 2012 is really a fantastic product in my opinion. One thing I cannot understand: is there a chargeback capability integrated in SC 2012?

    Luca

  • From what I can tell, Microsoft storage pools exist solely within the confines of a single server, which sucks. If they could have duplicated gluster or hadoop in some fashion Microsoft could have killed the storage industry nearly overnight... which needs to happen (prices are ridiculous as it is).  Ahh well.

    As for SCVMM and being a poor-man's cluster .... well you said the magic word:  Orchestrator.  That is actually very VERY doable.  Only thing we'd be waiting on is SP1.

  • Thanks for the informative article, By the way, I request you  further to mention the download link for above products,  so far I searched and got confused. ( michael2k1@gmail.com)

    Thanks more in advance.

  • @U.P.B. Michael: Here's the link you requested: www.microsoft.com/.../details.aspx

  • Thanks for the suggestions Deiruch. Great feedback.

    On #1 (resource pooling of storage across servers where the storage is local to each server), this scenario is supported only when you have a JBOD pool accessible from all the servers, via SAS connectors for example.

    On #2: while it is enticing to build an alternative to clustering in this manner, it would be good to understand what about clustering you find to be a high bar that makes you feel we need an alternative. There are some really interesting investments in Windows Server 2012 and System Center 2012 SP1 that simplify clustering Hyper-V hosts in order to provide high availability for virtual machines. First, a Hyper-V host cluster needs shared storage. Prior to Windows Server 2012 this required a block-based device (FC or iSCSI SAN); now you can use an SMB 3.0 share (a Windows Server 2012 file server, or third-party NAS devices that support SMB 3.0). With System Center 2012 SP1 you can now easily expose and manage this file-based storage for a Hyper-V host cluster. Second is the ability to create the cluster itself. Although the CTP release cannot be used to create the cluster, this is supported in SC 2012 VMM and will be supported before we are done with the SP1 release. This will give you a simplified experience for managing the storage and for creating and deploying the Hyper-V host cluster.

  • Thanks for your work Jeffrey.

    Getting too many questions about this.

    Now a good blog post to point people to understand Building Cloud Infra.

    Sincerely,

    Murat Demirkiran