With Windows Azure PowerShell Cmdlets, the process to customize and automate an infrastructure deployment to Windows Azure Infrastructure Services (IaaS) becomes a relatively simple and manageable task. This article examines a sample script which automatically deploys seven specified VMs into three target subnets based on a network configuration file with two availability sets and a load-balancer. The entire deployment took a little more than half an hour.
The PowerShell script assumes that the Windows Azure PowerShell cmdlets environment has been configured with an intended Windows Azure subscription, and that a network configuration file is in a specified location.
This file can be created from scratch or exported from the Windows Azure Network workspace by first creating a virtual network. Notice that in the netcfg file an affinity group and a DNS server are specified.
The four functions represent the key steps in performing a deployment including building a virtual network, deploying VMs, and configuring availability sets and load-balancers, as applicable.
Within the script, session variables and constants are assigned. The network information needs to be identical to what is defined in the netcfg file, since it is not automatically populated from an external source at this time.
To deploy multiple VMs programmatically, much information is repeated, so some parameters are defined with default values drawn from session variables or the initialization section. When calling a routine, a default value can be overridden by a passed-in value.
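As a sketch of this pattern (the function signature and variable names here are illustrative, not the actual script), a deployment routine might declare parameters with defaults drawn from session variables:

```powershell
# Illustrative sketch only; $imageName, $adminUser, and $VNetName are
# assumed to be session variables set in the initialization section.
function DeployVMsToAService {
    param (
        [string]  $serviceName,
        [string[]]$vmNames,
        [string]  $subnet,
        [string]  $image    = $imageName,   # default from a session variable
        [string]  $size     = 'Small',      # default overridable by the caller
        [string]  $vnetName = $VNetName
    )
    # ...deployment logic...
}

# A default is overridden simply by passing a value:
# DeployVMsToAService -serviceName 'foosvc' -vmNames @('fe1') -size 'Medium'
```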
Other than for the first VM, deploying additional VMs into an existing service does not require specifying the virtual network name. I nevertheless employ the same routine, with VNetName specified, for deploying both the first and subsequent VMs into a service; this produces a warning for each VM after the first one deployed into a service, as shown in the user experience later.
Simply pass an array into the routines, and an availability set or a load-balancer will be automatically built. These two functions streamline the process.
Rather than creating a cloud service while deploying a VM, I decided to create individual cloud services first. This makes troubleshooting easier since it is very clear when services are created.
With a target service in place, I simply loop through and deploy VMs based on the names specified in an array. Deploying or placing VMs into an availability set or a load-balancer then becomes simply a matter of adding machine names to a target array.
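A minimal sketch of such a loop, using the Windows Azure cmdlets of the time, might look like the following. The array, credential variables, endpoint ports, and load-balanced set name are illustrative assumptions, not the actual script:

```powershell
# Sketch: deploy each frontend VM into the same availability set and
# load-balanced endpoint. $imageName, $adminUser, $password are assumed
# session variables.
$feVMs = @('fe1','fe2','fe3')
foreach ($name in $feVMs) {
    $vm = New-AzureVMConfig -Name $name -InstanceSize Small `
            -ImageName $imageName -AvailabilitySetName 'feAvSet' |
          Add-AzureProvisioningConfig -Windows -AdminUsername $adminUser `
            -Password $password |
          Set-AzureSubnet -SubnetNames 'FrontEnd' |
          Add-AzureEndpoint -Name 'feEndpoint' -Protocol tcp `
            -LocalPort 80 -PublicPort 80 -LBSetName 'feLB' `
            -ProbeProtocol tcp -ProbePort 80

    # Specifying -VNetName when the service already has a deployment is
    # what produces the warnings noted in this post.
    New-AzureVM -ServiceName 'foosvc' -VMs $vm -VNetName 'foonet'
}
```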
Here, three frontend servers are load-balanced and configured into an availability set.
I kicked off at 3:03 PM and it finished at 3:36 PM. The four warnings were due to deploying additional VMs (specifically dc2, sp2013, fe2, and fe3) into their target services while, in the function DeployVMsToAService, VNetName is specified.
Seven VMs were deployed with specified names.
dc1 and dc2 were placed in the availability set, dcAvSet.
fe1, fe2, and fe3 were placed in the availability set, feAvSet.
The three were load-balanced at feEndpoint as specified in the script.
All VMs were deployed into their target subnets as shown in the virtual network, foonet.
The script also produces a log file capturing the state of each completed task. As each VM is deployed, the associated RDP file is also downloaded into a specified working directory, as shown below.
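A sketch of how the logging and RDP download might be done with the cmdlets of the time — the paths, file names, and VM names here are illustrative assumptions:

```powershell
# Sketch: after a VM is deployed, save its RDP file to the working
# directory and append an entry to a simple log file.
$workDir = 'C:\AzureDeploy'
Get-AzureRemoteDesktopFile -ServiceName 'foosvc' -Name 'fe1' `
    -LocalPath (Join-Path $workDir 'fe1.rdp')
"$(Get-Date) fe1 deployed, RDP file saved" |
    Out-File -FilePath (Join-Path $workDir 'deploy.log') -Append
```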
To make the code production-ready, more error handling needs to be put in place. A key missing piece of the script is the ability to automatically populate the network configuration from or to an external source. Still, the script provides a predictable, consistent, and efficient way to deploy infrastructure with Windows Azure Infrastructure Services. What comes next, once the VMs are in place, is to customize the runtime environment for an intended application, followed by installing the application. IaaS puts VMs in place as the foundation for PaaS, which provides the runtime for a target application. It is always about the application; we should never forget that.
Some noticeable advantages of running applications in Windows Azure are the availability and fault tolerance achieved by the so-called fault domains and upgrade domains. These two terms represent important strategies in cloud computing for deploying and upgrading applications. System Center 2012 SP1 has also integrated these concepts into the Virtual Machine Manager Service Template for deploying a private cloud.
The scope of a single point of failure is essentially a fault domain, and the purpose of identifying and organizing fault domains is to prevent a single point of failure. In its simplest form, a computer by itself connected to a power outlet is a fault domain: if the connection between the computer and its power outlet is lost, the computer is down, hence a single point of failure. Likewise, a rack of computers in a datacenter can be a fault domain, since a power outage of the rack will take out the collection of hardware in it, similar to what is shown in the picture here. Notice that how a fault domain is formed has much to do with how hardware is arranged, and a single computer or a rack of computers is not automatically a fault domain. Nonetheless, in Windows Azure a rack of computers is indeed identified as a fault domain, and the allocation of fault domains is determined by Windows Azure at deployment time. A service owner cannot control the allocation of a fault domain, but can programmatically find out which fault domain a service is running within. The Windows Azure Compute service SLA guarantees the level of connectivity uptime for a deployed service only if two or more instances of each role of the service are deployed.
On the other hand, an upgrade domain is a strategy to ensure an application stays up and running while undergoing an update. When possible, Windows Azure distributes instances evenly into multiple upgrade domains, with each upgrade domain a logical unit of a deployment. Upgrading a deployment is then carried out one upgrade domain at a time. The steps are: stop the instances running in the first upgrade domain, upgrade the application, bring the instances back online, then repeat the steps in the next upgrade domain. An upgrade is completed when all upgrade domains are processed. By stopping only the instances running within one upgrade domain, Windows Azure ensures that an upgrade takes place with the least possible impact on the running service. A service owner can optionally control the number of upgrade domains with the upgradeDomainCount attribute in the Windows Azure Service Definition schema (.csdef file).
Within a fault domain, the concept of fault tolerance does not apply, since everything is either up or down together, with no tolerance of a fault. Only when there is more than one fault domain, managed as a whole, is fault tolerance applicable. In addition to fault domains and upgrade domains, to ensure fault tolerance and service availability, Windows Azure also has network redundancy built into routers, switches, and load-balancers. The Fabric Controller (FC) also sets checkpoints and stores state data across fault domains to ensure reliability and recoverability.
I somehow got into writing PowerShell scripts in the last few days to facilitate deploying Windows Azure VMs into configured subnets for our upcoming Windows Azure IaaS Training Days. (Here's the shameless plug.)
First I got started with the Windows Azure PowerShell cmdlets and set up Windows Azure PowerShell. Then… this world is never the same anymore. I… see the light. Anyway, this allows me to run PowerShell within a Windows Azure subscription account. It's almost like having Microsoft datacenters in my living room; I can stop by and check them out anytime. Just open up my PowerShell ISE and there you are, play as hard as you want. So I wrote some script and ended up with what I am showing here.
I packaged a few cmdlets together into function calls such that I can deploy a VM, and add it into a load balancer, and an availability set by arranging the calls like the following:
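The original arrangement appeared as a screenshot; a sketch of it, with hypothetical function and parameter names standing in for the ones in the script, might look like:

```powershell
# Sketch of the call arrangement (names are illustrative): create the
# service, then deploy fe1, fe2, and fe3 into it, load-balanced and in
# one availability set.
CreateService       -serviceName 'foosvc' -affinityGroup 'fooag'
DeployVMsToAService -serviceName 'foosvc' -vmNames @('fe1','fe2','fe3') `
                    -subnet 'FrontEnd' -availabilitySet 'feAvSet' `
                    -lbSetName 'feLB' -vnetName 'foonet'
```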
So the above code will deploy the first VM, fe1, to a target service, followed by deploying two more VMs, fe2 and fe3, then load-balance the three and put them into an availability set.
And here's the infrastructure I have set up in Windows Azure. It needs to be configured first, so the Windows Azure DNS, an affinity group, a storage account, and a virtual network with target subnets are all put in place beforehand. What I have set up is the following:
Then after about 2 consecutive sleepless nights and a lot of coffee, finally
Well actually it was 12:54 PM, I cleared my throat and said, “Ground control, this is Major Tom, over…” and here’s what happened:
… then lunch, email, more email… then about 20 minutes later, the job ended quietly
and examined the session history, here’s how it went:
and in my Windows Azure subscription, now there are eight VMs deployed into three subnets.
The three VMs deployed to the frontend subnet were load-balanced and placed in an availability set. Two SharePoint boxes were in an availability set and deployed to the midtier subnet. And the two VMs, dc1 and dc2 in an availability set, and a SQL Server 2012 were deployed to the backend subnet.
And a number of data disks were also attached to intended VMs accordingly:
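Attaching a data disk can be sketched with the cmdlets of the time along these lines — the VM name, disk size, and label are illustrative assumptions:

```powershell
# Sketch: attach a new, empty data disk to a deployed VM, then apply
# the change to the running VM.
Get-AzureVM -ServiceName 'foosvc' -Name 'sql1' |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 100 `
        -DiskLabel 'sql1-data' -LUN 0 |
    Update-AzureVM
```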
The virtual network now looked like this after the deployment:
So never will I deploy a VM to Windows Azure without PowerShell again. The end.
Microsoft US Platform Technology Evangelists have published a series of articles that step through building your very own private cloud by leveraging Windows Server 2012, Windows Azure Infrastructure as a Service (IaaS), and System Center 2012 Service Pack 1. This series presents the methodology of building a private cloud from overview, fabric concepts and construction, deployment, and management, to extending it to Windows Azure and beyond.
And ultimately it extends existing on-premises establishments with off-premises deployments, taking the best of both models in a hybrid scenario. The following is the complete list of deliveries. I recommend bookmarking this post (http://aka.ms/privatecloud) and the TechNet Radio series by Keith Mayer and Yung Chou (http://aka.ms/bpc) to walk through the entire process of building a private cloud.
My personal thanks to Keith Mayer (http://keithmayer.com) for his many contributions and leading efforts in organizing this series.
The process of building a private cloud needs to start with the fundamentals, both conceptual and technical. This blog post series intends to address both, going through a process of forming the concepts, securing the fundamentals, examining what is under the hood, and visualizing it all with abstraction. At the same time, the TechNet private cloud solution portal at http://technet.microsoft.com/en-us/cloud/private-cloud offers IT professionals opportunities to further advance their learning in the architecture, planning, and implementation of cloud and datacenter solutions with Microsoft technologies.
Below is the weekly breakdown of each topic published in this series to facilitate your learning.
Microsoft private cloud solutions build on Windows Server 2012 as the basic building block and System Center 2012 SP1 as the comprehensive management solution. Both present key enabling technologies for building and extending a private cloud. IT professionals must recognize the urgency of mastering these two major technical components to maintain relevancy in Microsoft infrastructure going forward.
Get prepared to follow along by first acquiring the essential components and building blocks for constructing and extending a private cloud!
Prepare for the MCSE: Private Cloud certification exams with these popular FREE exam study guides:
Reference: Hyper-V Replica Explained
Windows Server 2012 Hyper-V Replica is a built-in mechanism for replicating virtual machines (VMs). It can replicate selected VMs asynchronously from a primary site to a designated replica site across a LAN/WAN. Here a replica site hosts a replicated VM, while an associated primary site is where the source VM runs. Either a replica site or a primary site can be a Windows Server 2012 Hyper-V host or a Windows Server 2012 failover cluster.
Hyper-V Replica is an attractive Business Continuity and Disaster Recovery (BCDR) solution, since it is not only conceptually easy to understand but also technically straightforward to implement, as demonstrated by the following schematic and the lab introduced later in this article. Further, with the many newly added capabilities and flexibilities in Windows Server 2012 such as Storage Spaces, continuously available file shares, and Hyper-V over SMB 3.0, BCDR is no longer an academic exercise; with Hyper-V Replica it is a feasible solution within reach. It is an ideal and affordable BCDR solution for small and medium businesses.
Hyper-V Replica is also applicable to a failover cluster. And this lab, http://aka.ms/ReplicaBroker, details the processes, steps, and operations in implementing Hyper-V Replica Broker in failover cluster settings. For those who are new to Hyper-V Replica, a walk-through of Hyper-V Replica essentials is available at http://aka.ms/Replica.
With a failover cluster of Hyper-V hosts in place, one needs to first configure a Hyper-V Replica Broker role using Failover Cluster Manager, then enable Hyper-V Replica on the Hyper-V Replica Broker. Ordinarily, Hyper-V Replica can be enabled in the Hyper-V Settings of a Hyper-V host. In a failover cluster, however, the Hyper-V Replica configurations are grayed out in an individual Hyper-V host's Hyper-V Settings and must be managed through the Hyper-V Replica Broker role configured with Failover Cluster Manager.
The schematic below depicts how a Windows Server 2012 Hyper-V failover cluster is employed as a Hyper-V Replica site through the cluster role, Replica Broker. Instead of operating directly on a Hyper-V host as in a non-clustered environment, the process of replicating a VM now specifies a Replica Broker as the replica site. The Replica Broker then places the VM onto the associated failover cluster storage accordingly.
A Windows Active Directory domain is not a requirement for Hyper-V Replica, which can be implemented between workgroups or domains with certificate-based authentication and without the need to establish a domain trust relationship. Active Directory is nevertheless a requirement if a Hyper-V host that is part of a failover cluster is involved, in which case all Hyper-V hosts of the failover cluster must be in the same Active Directory domain.
With a failover cluster of Windows Server 2012 Hyper-V hosts, we now will enable VM replication in Hyper-V Replica Broker which is a cluster role to be configured with Failover Cluster Manager. This lab assumes a Windows Server 2012 Hyper-V failover cluster is in place, and presents the processes, steps, and operations to:
Here I am making this lab guide available as a PDF download, shared with a tweet to help spread the information. I believe the content in http://aka.ms/Replica and this lab together form a solid foundation in Hyper-V Replica for IT professionals. Enjoy it and happy replicating.
Reference: Hyper-V Replica Broker
Windows Server 2012 Hyper-V Role introduces a new capability, Hyper-V Replica, as a built-in replication mechanism at a virtual machine (VM) level. Hyper-V Replica can asynchronously replicate a selected VM running at a primary site to a designated replica site across LAN/WAN. The following schematic presents this concept.
Here both the primary site and the replica site are Windows Server 2012 Hyper-V hosts, where the primary site runs production, or so-called primary, VMs, while the replica site stands by with replicated VMs off, to be brought online should the primary site experience a planned or unplanned VM outage. Hyper-V Replica requires neither shared storage nor specific storage hardware. Once an initial copy is replicated to a replica site and replication is ongoing, Hyper-V Replica replicates only the changes of a configured primary VM, i.e. the deltas, asynchronously.
Once all the requirements are put in place, first configure a target replica server, followed by configuring an intended VM at a primary site for replication. The following is a sample process for establishing Hyper-V Replica:
Identify a Windows Server 2012 Hyper-V host as the replica site and enable Hyper-V Replica in the Hyper-V settings on the host in Hyper-V Manager. The following are Hyper-V Replica sample settings of a replica site, Development.
Using the Hyper-V Manager of a primary site, right-click an intended VM to enable Hyper-V Replica by walking through the wizard to establish a replication relationship. Below shows how to enable Hyper-V Replica of A-Selected-VM at the primary site, VDI.
As applicable, carry out the initial replication of a target workload. If the initial copy is to be sent over the network, this happens in real time or according to a schedule at the end of step 2.
Conduct a Test Failover event from the replica site by right-clicking the replicated VM and selecting the option, to confirm its readiness to accept a replication request and process the traffic.
Conduct a Planned Failover event from the primary site after shutting down an intended source VM as shown below.
The following information is returned upon a successful execution of a Planned Failover event of A-Selected-VM from a primary site with VDI as the source Hyper-V host to a replica site with Development as the destination host, for example.
Notice that successfully performing a Planned Failover automatically establishes a reverse replication relationship, in which the former replica site (Development) becomes the current primary site and the former primary site (VDI) becomes the current replica site, and starts the primary VM. Replication Health, accessible by right-clicking a VM with Hyper-V Replica enabled in Hyper-V Manager, reveals the current replication relationship. The following shows the Replication Health information of the VM, VDI, before and after successfully performing a planned failover, with a reverse replication relationship automatically set.
In the event that a source VM experiences an unexpected outage at the primary site, it is necessary to manually start the replicated VM at the replica site. In this scenario a reverse replication relationship will not be automatically established, since some un-replicated changes may have been lost along with the unexpected outage.
Conduct another Planned Failover event to confirm that the reverse replication works. In the presented scenario, the planned failover will now be from Development back to VDI. Upon successful execution of the planned failover event, the resulting (i.e. reversed) replication relationship should be VDI as the primary site with Development as the replica site, which is the same state as at the beginning of step 5.
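The wizard-driven steps above can also be sketched with the Hyper-V PowerShell cmdlets. The host and VM names follow the example in this article; the port and authentication settings are illustrative assumptions that will vary by environment:

```powershell
# On the primary host (VDI): enable replication of the VM to the
# replica server, then start the initial replication.
Enable-VMReplication -VMName 'A-Selected-VM' `
    -ReplicaServerName 'Development' -ReplicaServerPort 80 `
    -AuthenticationType Kerberos
Start-VMInitialReplication -VMName 'A-Selected-VM'

# Planned failover: shut down the source VM, prepare the failover on the
# primary, complete it on the replica, then reverse the relationship.
Stop-VM -Name 'A-Selected-VM'
Start-VMFailover -VMName 'A-Selected-VM' -Prepare
Start-VMFailover -VMName 'A-Selected-VM' -ComputerName 'Development'
Set-VMReplication -VMName 'A-Selected-VM' -ComputerName 'Development' -Reverse
Start-VM -Name 'A-Selected-VM' -ComputerName 'Development'
```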
Test Hyper-V Replica against VM mobility scenarios, including Live Migration of Hyper-V Replica-enabled VMs and related storage among cluster nodes, SMB 3.0 shares, file servers, etc. This is where network virtualization (presented later in this book) becomes critically needed, to provide transparency of network configurations at a VM level and minimize the technical complexities of relocating a VM or its associated storage.
Incorporate Hyper-V Replica configurations and maintenance into applicable corporate IT standard operating procedures and start monitoring and maintaining the health of Hyper-V Replica resources.
The replication process replicates, i.e. creates, an identical VM in the Hyper-V Manager of the replica server; subsequently, the change-tracking module of Hyper-V Replica tracks and replicates the write operations in the source VM every few minutes after the last successful replication, regardless of whether the associated VHD files are hosted in SMB shares, Cluster Shared Volumes (CSVs), SANs, or directly attached storage devices.
Importantly, to employ a Hyper-V failover cluster as a replica site, one must use Failover Cluster Manager to perform all Hyper-V Replica configuration and management, and first create a Hyper-V Replica Broker role, as demonstrated below.
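While this is typically done in Failover Cluster Manager ("Configure Role… > Hyper-V Replica Broker"), the same result can be sketched with the FailoverClusters PowerShell module. The role name and static IP address here are illustrative assumptions:

```powershell
# Sketch: create a Hyper-V Replica Broker role in a failover cluster.
# The broker gets its own client access point (name + IP).
Add-ClusterServerRole -Name 'ReplicaBroker' -StaticAddress 10.0.0.50
Add-ClusterResource -Name 'VM Replication Broker' `
    -Type 'Virtual Machine Replication Broker' -Group 'ReplicaBroker'
Add-ClusterResourceDependency 'VM Replication Broker' 'ReplicaBroker'
Start-ClusterGroup 'ReplicaBroker'
```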
A Hyper-V Replica Broker is the focal point in this case. It queries the associated cluster database to determine the correct node to which to redirect VM-specific events, such as Live Migration requests, in a replica cluster.
A Windows Active Directory domain is not a requirement for Hyper-V Replica, which can also be implemented between workgroups and untrusted domains with certificate-based authentication. Active Directory is however a requirement if a Hyper-V host that is part of a failover cluster is involved, in which case all Hyper-V hosts of the failover cluster must be in the same Active Directory domain, with security enforced at the cluster level.
In a BC scenario, i.e. a planned failover event of a primary VM, Hyper-V Replica first copies any un-replicated changes to the replica VM, so that the event produces no loss of data. Once the planned failover is completed, the replica VM becomes the primary VM and carries the workload, while a reverse replication is automatically set. In a DR scenario, i.e. an unplanned outage of a primary VM, an operator needs to manually bring up the replicated VM with an expectation of some data loss, since changes of the primary VM not yet replicated to the replica VM have been lost along with the unplanned outage.
We're here for the final part in our Building a Private Cloud series, and in today's episode Keith Mayer and Yung Chou show us how to manage our hybrid cloud environment with System Center 2012 SP1 App Controller. Tune in as they demo how to get started with this service as well as wrap up this fantastic series.
Websites & Blogs:
The entire series is also available via http://aka.ms/bpc.
In Part 12 of their Building a Private Cloud series, Keith Mayer and Yung Chou explore the world of hybrid cloud scenarios using Windows Azure Virtual Networks. Tune in as they walk us through some common scenarios where IT Pros may find it more appealing to deploy a hybrid datacenter solution as they explain what components are required and how to build out and configure both your cloud and local networks.
Continuing their Building a Private Cloud with System Center 2012 SP1 series, Keith Mayer and Yung Chou walk us through the steps needed to dynamically optimize our virtual machines. Tune in as they show us how to set the reserve capacity for our hosts, balance VM loads, set-up auto live migrations and optimize our power consumption.
We’ve hit Part 10 in our Building a Private Cloud with System Center 2012 SP1 series, and in today’s episode Keith Mayer and Yung Chou discuss how you can protect your newly created private cloud with Data Protection Manager. Tune in as they discuss disaster recovery solutions and performance enhancements as well as demo for us how to setup and configure DPM for your environment and then lay out the process for backing up your servers and how DPM integrates with Virtual Machine Manager.
Keith Mayer and I had a chance to work on this project with our very best online content producer, Chris Caldwell. And what a productive experience and a fun time we have had. Keith’s extensive knowledge on many aspects of constructing a private cloud brought so much substance into every episode we recorded for this series. And I had the opportunities to offer my views on so many subjects discussed in the series.
Among these thirteen episodes, one will find that each is an independent learning module with specific objectives, steps, and operations to carry out tasks, while the entire series collectively presents the methodology, processes, and milestones to conceptualize, design, and implement a private cloud.
Your call to action is to download Windows Server 2012 (http://aka.ms/8) and System Center 2012 SP1 (http://aka.ms/2012), review this TechNet radio series, and follow through the entire methodology (http://aka.ms/privatecloud) to learn and build your private cloud.
For this topic, like many others in cloud computing, the first order of business is to know what exactly a hybrid cloud is. Cloud is a broad subject and covers virtually every aspect of IT, from hardware acquisition, infrastructure, deployment, and management, all the way to decommissioning a workload. Without first specifying what it is, there are many opportunities to miscommunicate, misunderstand, and misinform on this subject. Clarity is critical in a cloud computing conversation.
So what is a hybrid cloud? It is officially defined in NIST SP 800-145, which states:
“The cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).”
This definition is, however, a confusing one with a rooted discrepancy. A composition of infrastructures is not a well-established concept, and instead of clarifying what a hybrid cloud is, this definition brings more questions than answers. Further, a private cloud, a public cloud, and a community cloud are all defined according to the intended users, while a hybrid cloud is defined by implementation, i.e. a composition of infrastructures. This change of criteria makes the hybrid cloud an inconsistent and disjointed concept among those defined in 800-145: the former is defined by the implementation of infrastructure, while the latter three are defined by the intended users. The definition of a hybrid cloud insufficiently addresses the fact that the intended users remain either a target group of users or the general public. The truth is that a hybrid cloud extends, and remains, a private cloud or a public cloud.
A hybrid cloud is a concept of implementation and a service provider’s concern. The term, hybrid, here indicates that a cloud solution is with some resources deployed on premises and some off premises. Therefore, a hybrid cloud is an on-premises cloud with some resources deployed off premises. Similarly, an off-premises cloud with resources deployed on premises is also a hybrid cloud. Both definitions are from a resource placement’s viewpoint, and not based on the intended users.
For example, an on-premises private cloud integrated with resources deployed to Windows Azure (which runs off premises in Microsoft datacenters) becomes a hybrid cloud. At the same time, an application deployed to Windows Azure that references corporate on-premises resources is also a hybrid cloud. In summary, a private or public cloud (application) with resources in more than one cloud deployment model is a hybrid cloud.
Therefore a hybrid cloud is intended for either a targeted group of users or the general public. Hence a hybrid cloud is essentially a private cloud or a public cloud, respectively.
The significance of managing a hybrid cloud stems from the potential complexities and complications of operating on resources deployed across various facilities, among corporate datacenters and those of third-party cloud service providers. Multiple management tools, inconsistent UIs, heterogeneous operating platforms, non- or poorly-integrated development tools, etc. can and will noticeably increase overhead and reduce productivity. A unified operating platform, highly integrated development tools, a comprehensive management suite, a consistent user experience, and simple user operations are critical to fundamentally minimizing the TCO and accelerating the ROI of a hybrid cloud. Here, App Controller, a self-service vehicle for users with delegated authority, is part of System Center 2012 SP1, together forming a comprehensive cloud management solution.
A hybrid cloud presents many challenges in managing resources among multiple cloud service providers. With the technical complexities and administrative overhead of managing individual clouds in a multi-tenant environment, IT can easily become a bottleneck in managing who can do what, where, when, and how much for each and every hybrid cloud deployment among on-premises and off-premises facilities. Delegating authority to the business and technical roles responsible for an application, including the solution architect, release owner, application developer, and technical support, so that they can self-manage and maintain their own deployments, is an effective and efficient way to manage a hybrid cloud environment. In System Center 2012 SP1, both SCVMM and App Controller have the same self-service model and delegation of authority built in, with user role profiles and Run As accounts.
A user role (profile) is a set of policies and constraints which define not only who can access what, but also when, where, which operations can be performed on an authorized resource, and how much can be consumed. An authorized user can create a user role and assign membership in the Settings workspace of both the App Controller and SCVMM admin consoles to manage delegation of authority. The following shows a few default user role profiles configured for a Stock Trader application in SCVMM 2012 SP1, including Delegated Administrator, Tenant Administrator, and Self-Service User. Here, Alice is a user with the Delegated Administrator role, which has access to specified resources comprising a number of Run As accounts.
In an SCVMM-based deployment, user credentials for accessing resources can be provided by a Run As account which is a container for a set of stored credentials. Only administrators and delegated administrators can create and manage Run As accounts. Read-only administrators can see the account names associated with Run As accounts that are in the scope of their user role. As shown below, Run As accounts are managed in the Settings workspace under the Security folder of an SCVMM admin console. Here, cloud admin is a Run As account configured with a specified set of user credentials.
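Creating such an account can be sketched with the VMM PowerShell module. The account name follows this post; the credential and description are illustrative assumptions:

```powershell
# Sketch: create a Run As account named 'cloud admin' in SCVMM.
# Get-Credential prompts for the credentials to be stored in the container.
$cred = Get-Credential   # e.g. a domain service account
New-SCRunAsAccount -Name 'cloud admin' -Credential $cred `
    -Description 'Run As account for cloud administration'
```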
Once a Run As account is configured for a user role, a user with that user role can then reference the Run As account for accessing and operating on objects accessible to or created by the Run As account. The user simply references the Run As account and does not need to actually present its user credentials. In other words, a Run As account delegates authority to a user without the need for the user to know and present the user credentials of the Run As account. Not only is this a defense-in-depth strategy, i.e. providing minimal knowledge and hence exposing a minimal attack surface, it also offers maximal transparency in account administration, since changing the user credentials of the Run As account, should it become necessary, is transparent to the references to (i.e. symbolic links to) the Run As account.
To expose the resources of a hybrid cloud, an authorized user must connect App Controller with the SCVMM servers where on-premises private clouds are deployed, and with the service providers and Windows Azure subscriptions hosting resources off premises, as applicable. Before initiating a connection from App Controller to either a Windows Azure subscription or a 3rd-party cloud service provider, a corresponding X.509 certificate, i.e. the public key, must first be put in place. For instance, the following Windows Azure screen capture presents a list of X.509 certificates uploaded into the Settings workspace of the Windows Azure management portal for establishing secure connections with particular subscription IDs.
At the same time, in App Controller, an authorized user can then configure and establish secure connections with off-premises deployments in the Connections and Subscriptions pages of the Settings workspace as illustrated here on the left.
For a secure connection, App Controller will in the process acquire the private key from a user-specified certificate in Personal Information Exchange, or pfx, format. As needed, use mmc (Microsoft Management Console) with the Certificates snap-in to export the private key of an intended certificate and generate the certificate in pfx format. The following presents the user experience of configuring secure connections in App Controller.
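On Windows Server 2012, the same export can be scripted with the PKI cmdlets instead of mmc; the thumbprint and file path below are placeholders for your own certificate:

```powershell
# Password to protect the exported private key.
$pwd = Read-Host -AsSecureString -Prompt "Password to protect the pfx file"

# Locate the certificate by thumbprint (placeholder value shown) and
# export it, including the private key, in pfx format.
$cert = Get-ChildItem Cert:\CurrentUser\My\1234567890ABCDEF1234567890ABCDEF12345678
Export-PfxCertificate -Cert $cert -FilePath C:\certs\appcontroller.pfx -Password $pwd
```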
App Controller, a member of System Center 2012 SP1, provides a self-service experience to configure, deploy, and maintain virtual machines and services across private and public clouds with secure connections. Particularly, for managing a hybrid cloud of System Center 2012 SP1 Virtual Machine Manager and Windows Azure, App Controller offers a web-based self-service vehicle seamlessly integrating the two. The following two screen captures show an instance of App Controller managing an on-premises SCVMM-based private cloud and an off-premises deployment associated with a Windows Azure subscription.
App Controller is a web-based self-service portal. Once authenticated, a user has access to, and can operate upon, authorized deployment instances and resources according to the user role assigned to the user. The above screen capture is an example of managing a VM of a service deployed to an on-premises private cloud, and below is a similar desktop experience while operating on a Windows Azure VM deployed to one of Microsoft's datacenters.
Notice that with App Controller, all clouds and any clouds can be managed with a single pane of glass, a user experience consistent with that of Windows desktop operations, and in a self-servicing manner. This is significant since it fundamentally minimizes the support cost in the long run, accelerates user adoption, and makes it easy for an authorized user to consume resources with a familiar Windows operating environment and desktop task routines.
With App Controller connecting with SCVMM and Windows Azure, an authorized user now has an opportunity to copy a VM to Windows Azure or a 3rd party facility and easily extend a private cloud beyond a corporate datacenter. Copying a VM can be done with familiar desktop operations and without the knowledge of the underlying cloud infrastructures since App Controller does not reveal SCVMM fabric.
The process is to first store a candidate VM in SCVMM and then copy the VM to Windows Azure, as illustrated in the following schematic. In an SCVMM-based deployment, storing a VM can be initiated from either App Controller or the SCVMM admin console. It essentially exports the VM instance from its current location, the default VM path specified in the Placement property of the host where the VM instance lives, to the network share specified in the Stored VM path of the associated cloud library properties. The store process captures the current state of the VM, exports it, and makes the VM portable and redeployable.
One can copy a VM to Windows Azure with App Controller, but not with the SCVMM admin console, which is intended for managing on-premises private clouds and fabric and has no graphical user interface to directly connect with Windows Azure or a 3rd-party cloud service provider. Depending on where a VM is copied to, the process wizard leads a user through the applicable scenario. If the destination is a storage container, copying a VM simply transmits the set of files associated with the VM. If a VM is being copied into a cloud service, the process copies and then deploys the VM. A walkthrough of storing and copying a VM from a private cloud to Windows Azure is available at http://aka.ms/copyVM. This process is an important delivery of App Controller in forming a hybrid cloud.
A VM stored in SCVMM can later be redeployed, in which case a copied VM in Windows Azure can become a secondary or duplicated workload, or vice versa. Either way, with App Controller it is just a few mouse-clicks to relocate, duplicate, back up, or redeploy VMs. Many scenarios perhaps previously technically challenging or financially cost-prohibitive to implement, like business continuity and disaster recovery, production simulation or duplication, transaction log reruns, training in production, etc., now all become an IT reality and practical business opportunities. This is indeed an exciting time for IT.
Cloud levels the field: it makes big corporations humble and rethink how they do business, and offers small and medium companies a stage to act big and compete with the establishment at the same level with global reach. A hybrid cloud brings many interesting scenarios and business opportunities in backup and restore, business continuity, disaster recovery, branch office deployments, remote access, development, prototyping, training, and on and on. Companies of all sizes, whether service providers, consulting services, or product developers, now all have the opportunity to play a major role, lead, and contribute to the hybrid cloud ecosystem.
A strategic step in realizing a hybrid cloud is to instate, at the next opportunity, a common management platform for deploying and managing all clouds and any clouds, on or off premises. A comprehensive solution like System Center 2012 SP1 should be brought in as early as possible; specifically, for virtualization and cloud computing, SCVMM must be in place to form the private cloud fabric. And as the private cloud fabric is formed, so grow all the clouds. Then employ App Controller for delegating authority, enabling users to consume resources in a self-servicing manner, and facilitating the development of a hybrid cloud. Start with private clouds, but always plan hybrid; that is the emerging IT computing model.
We, Microsoft Platform Technology Evangelists, this month are writing a series of articles that step through building your very own private cloud by leveraging Windows Server 2012, Windows Azure Infrastructure as a Service (IaaS) and System Center 2012 Service Pack 1. Week-by-week, we’ll be walking through the steps to envision, plan and implement your very own private cloud to take your existing data center to the next level.
The significance of managing a hybrid cloud lies in the potential complexities and complications of operating on resources deployed across various facilities, among corporate datacenters and those of 3rd-party cloud service providers. Multiple management tools, inconsistent UIs, heterogeneous operating platforms, non- or poorly-integrated development tools, etc. can and will noticeably increase overhead and reduce productivity.
The entire blog post series is at http://aka.ms/PrivateCloud with the latest updates.
In Part 9 of their Building a Private Cloud with System Center 2012 SP1 series, Keith Mayer and Yung Chou tackle the process of servicing and updating our private cloud applications and the underlying fabric. Tune in as they show us step-by-step how to update our service templates as well as how to apply and manage critical updates to our applications.
Please come with an active Windows Azure subscription (http://aka.ms/90)
In Part 8 of the TechNet Radio Building a Private Cloud with System Center 2012 SP1 series, Keith Mayer and Yung Chou explore the topic of how to deploy and manage your applications to the private cloud. Tune in as they discuss how to set up and create profiles for your virtual machine templates.
Leaping forward and setting a new standard for hybrid cloud, the next generation of enterprise computing, is Windows Server 2012. While cloud computing is emerging as a service delivery platform, newly introduced capabilities and features make Windows Server 2012 not only a technology transformation vehicle for enterprise IT, but a skill upgrade and a career accelerator for IT professionals. Here, seven noticeable capabilities and features highlight the significance of this release and form a terse learning roadmap. An evaluation copy of Windows Server 2012 in both ISO and VHD format is available at http://aka.ms/8.
1. Installation Options and Features On Demand
There are now three installation options of Windows Server 2012, and they are changeable on demand. Server Core, Minimal Server Interface, and Server with a GUI are the three types of installation, and a system administrator can as needed change from one installation type to another by installing or uninstalling the associated Windows features, as shown below, such that Server Manager, MMC, or the full graphical interface can be made available to facilitate administration or disabled for better security. The post, Windows Server 2012 Installation Options, has additional details. Additionally, with Features On Demand, a new capability of Windows Server 2012, a user can remove or add back a server feature based on business needs with PowerShell commands.
With the installation options and Features On Demand, a system administrator can now as needed raise or lower the UI level of a server installation and operate with maximal productivity without permanently altering the installed payloads.
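For example, converting a Server with a GUI installation down to Server Core, and back, is a matter of uninstalling or installing two features with the server manager cmdlets:

```powershell
# Convert Server with a GUI down to Server Core by removing the GUI features.
Uninstall-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell -Restart

# Later, bring the graphical shell back on demand.
Install-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell -Restart

# Features On Demand: the -Remove switch deletes the feature's payload
# from disk entirely, not just disabling it.
Uninstall-WindowsFeature Server-Gui-Shell -Remove
```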
2. Storage Space and Thin Provisioning
With just a bunch of disks (or simply JBOD), Windows Server 2012 Storage Spaces offers an abstraction to manage selected disks from the JBOD as one entity, a so-called storage pool. In File and Storage Services of Server Manager, a user can first define a storage pool, create a virtual disk which defines the storage layout of the pool, then format and create a volume out of the virtual disk. Windows Server 2012 can operate as a software RAID controller and offers three storage layouts, Simple, Mirror, and Parity, comparable to RAID 0/1/5. A storage pool supports both Thin and Fixed provisioning methods; the former can over-subscribe and allocate storage capacity just in time, while the latter acquires the specified storage capacity when the configuration is implemented. The post, Windows Server 2012 Storage Virtualization Explained, further examines the concept.
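The same pool-then-virtual-disk flow can be scripted with the Windows Server 2012 storage cmdlets; the pool and disk names here are illustrative:

```powershell
# Gather all physical disks eligible for pooling.
$disks = Get-PhysicalDisk -CanPool $true

# Create a storage pool from them under the Storage Spaces subsystem.
New-StoragePool -FriendlyName "Pool1" `
    -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks

# Carve out a thinly provisioned, parity (RAID 5-like) virtual disk;
# Thin provisioning lets the 10 TB size over-subscribe the pool.
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Data1" `
    -ResiliencySettingName Parity -ProvisioningType Thin -Size 10TB
```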
3. Hyper-V over SMB
Server Message Block (SMB) is a network file sharing protocol that allows applications to read and write files and request services from server programs in a computer network. Windows Server 2012 introduces version 3.0 of the SMB protocol. A Windows Server 2012 Hyper-V host can employ SMB 3.0 file shares as shared storage for virtual machine (VM) configuration files, VHDs, and snapshots. Further, SMB file shares can also store user database files of a stand-alone SQL Server 2008 R2. This capability is significant since VMs or databases can now be dynamically migrated onto an SMB 3.0 share, which is natively supported by Windows Server 2012.
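A sketch of the idea, assuming a file server named FS1 and a Hyper-V host whose computer account CONTOSO\HV1$ needs rights on the share (all names are placeholders):

```powershell
# On the file server: create an SMB 3.0 share granting the Hyper-V host's
# computer account and the admin full control.
New-SmbShare -Name VMStore -Path C:\VMStore `
    -FullAccess "CONTOSO\HV1$", "CONTOSO\Administrator"

# On the Hyper-V host: place a new VM's configuration and VHDs directly
# on the SMB share.
New-VM -Name "Test01" -MemoryStartupBytes 1GB -Path \\FS1\VMStore
```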
4. Continuously Available File Share
Scale-out file servers, one of the available cluster roles, are built on top of the Failover Clustering feature of Windows Server 2012 and the SMB 3.0 protocol enhancements. A scale-out file server can scale its capacity upward or downward dynamically as needed. In other words, one can start with a low-cost solution such as a two-node file server and later add additional nodes (up to four) without affecting the operation of the file server. The following depicts the logical steps of constructing a Continuously Available File Share infrastructure in a Windows Server 2012 environment.
SMB 3.0 is capable of simultaneous access to data files with direct I/O through all the nodes in a file server cluster. This improves the usage of network bandwidth and the load balancing of file server clients, and also optimizes performance for server applications. SMB scale-out requires Cluster Shared Volumes version 2, which is included in Windows Server 2012.
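With a failover cluster already formed, configuring the role comes down to one clustering cmdlet plus a continuously available share; the role name, path, and account are placeholders:

```powershell
# Add the Scale-Out File Server role to an existing failover cluster.
Add-ClusterScaleOutFileServerRole -Name "SOFS1"

# Create a continuously available share on a Cluster Shared Volume path,
# granting a Hyper-V host's computer account full control.
New-SmbShare -Name VMShare -Path C:\ClusterStorage\Volume1\VMShare `
    -FullAccess "CONTOSO\HV1$" -ContinuouslyAvailable $true
```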
5. Hyper-V Replica
With Hyper-V Replica, administrators can replicate their Hyper-V virtual machines from one Hyper-V host at a primary site to another Hyper-V host at the Replica site. This feature lowers the total cost-of-ownership for an organization by providing a storage-agnostic and workload-agnostic solution that replicates efficiently, periodically, and asynchronously over IP-based networks across different storage subsystems and across sites. This scenario does not rely on shared storage, storage arrays, or other software replication technologies.
For small and medium businesses, Hyper-V Replica is a technically easy-to-implement and financially very affordable disaster recovery (DR) solution.
Information of Hyper-V Replica in a cluster setting is also available at http://aka.ms/ReplicaBroker.
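Enabling replication of a VM from a primary host to a Replica server can be sketched with the Hyper-V cmdlets; the server names, VM name, and storage path are illustrative:

```powershell
# On the Replica server: accept incoming replication over Kerberos/HTTP.
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation "D:\Replicas"

# On the primary host: enable replication for the VM, then kick off the
# initial copy asynchronously over the IP network.
Enable-VMReplication -VMName "Web01" `
    -ReplicaServerName "hv-replica.contoso.com" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "Web01"
```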
6. Shared Nothing Live Migration
Live Migration is the ability to move a virtual machine from one host to another while powered on without losing any data or incurring downtime. With Hyper-V in Windows Server 2012, Live Migration can be performed on VMs using shared storage (SMB share) or on VMs that have been clustered.
Windows Server 2012 also introduces a new shared-nothing live migration, which requires no shared storage and no shared cluster membership. All it requires is a Gigabit Ethernet connection between Windows Server 2012 Hyper-V hosts. With shared-nothing live migration, a user can relocate a VM between Hyper-V hosts, including moving the VM's virtual hard disks (VHDs), memory content, processor, and device state, with no downtime to the VM. In the most extreme scenario, a VM running on a laptop with VHDs on the local hard disk can be moved to another laptop connected by a single Gigabit Ethernet network cable.
One should not assume that shared-nothing live migration suggests that failover clustering is no longer necessary. Failover clustering provides a high availability solution, whereas shared-nothing live migration is a mobility solution that gives new flexibility in a planned movement of VMs between Hyper-V hosts. Live migration supplements failover clustering. Think of being able to move VMs into, out of, and between clusters and between standalone hosts without downtime. Any storage dependencies are removed with shared-nothing live migration.
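Operationally, a shared-nothing move reduces to enabling migration on both hosts and issuing a single Move-VM; the host names, subnet, and path below are placeholders (and Kerberos authentication assumes constrained delegation is configured):

```powershell
# On both hosts: enable live migrations, allow them on a specific subnet,
# and pick the authentication protocol.
Enable-VMMigration
Add-VMMigrationNetwork 192.168.10.0/24
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos

# Move the running VM and all of its storage to the destination host's
# local disk -- no shared storage or cluster membership involved.
Move-VM -Name "Web01" -DestinationHost "HV2" `
    -IncludeStorage -DestinationStoragePath "D:\VMs\Web01"
```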
7. Hyper-V Network Virtualization
Network virtualization is conceptually very similar to server virtualization, which runs multiple server instances on the same hardware while each server instance runs as if on dedicated hardware. The following depicts the concept.
The implementation of network virtualization includes two methods: Generic Routing Encapsulation (GRE) and IP address rewrite.
GRE employs one host IP address with minimal burden on the switches, and its full MAC headers and explicit Tenant Network ID marking support traffic analysis, metering, and control. IP address rewrite works on existing NICs, switches, and network appliances and is immediately and incrementally deployable today. Windows Server 2012 Hyper-V supports both methods.
The following is a sample of Hyper-V network virtualization in action, where the two domains, contoso.yc and fabrikam.yc, are running on the same Hyper-V host on the same NIC and separate VLANs. The DNS configuration shown on the DC of each domain uses the same IP configuration, while the two domains are isolated from each other, each running as if on a dedicated network fabric.
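Windows Server 2012 exposes the virtualization policy through network virtualization cmdlets; a minimal sketch with made-up addresses, mapping a tenant-visible customer address onto a provider (physical) address:

```powershell
# Map a VM's customer address onto a provider address within virtual
# subnet 5001, using GRE encapsulation; all values are illustrative.
New-NetVirtualizationLookupRecord -CustomerAddress "10.0.0.5" `
    -ProviderAddress "192.168.10.20" -VirtualSubnetID 5001 `
    -MACAddress "00155D010005" -Rule "TranslationMethodEncap"

# Define the customer route for the virtual subnet within a routing domain.
New-NetVirtualizationCustomerRoute `
    -RoutingDomainID "{11111111-2222-3333-4444-000000000000}" `
    -VirtualSubnetID 5001 -DestinationPrefix "10.0.0.0/24" -NextHop "0.0.0.0"
```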
There are two abstractions, Private Cloud and Service, in VMM directly relevant to an application deployment. Here a service is an instance of an application which may consist of a set of VMs forming the application architecture. To better understand the deployment process and associated settings in VMM, these two abstractions are important to know.
Specifically in VMM, these two abstractions are complementary and coupled. A private cloud is the home of a service, and a service only lives in a private cloud. While a private cloud may not yet have a service deployed, the existence of a service denotes that a target private cloud is in place.
A private cloud is a container to keep services. This container is configured with a set of constraints (on resource consumption and access control) which are automatically imposed upon a deployed service. A private cloud by itself has not much meaning; it is just an artifact with a set of variables and parameters. Only when a private cloud is hosting a service does the cloud become functional, enforcing, and significant. In the context of VMM, one should distinguish a private cloud itself from a service running in a private cloud; they are not to be used interchangeably.
A service is an application instance which may consist of a set of VMs (representing an application architecture) collectively delivering a business function. This set of VMs can be identified, operated, and managed as one logical entity. In my view, this abstraction is critical for Microsoft private cloud implementations.
Since VMM can deploy a set of VMs forming the application architecture as one logical entity, the processes, tasks, operations, and interdependencies of configuring a target and possibly distributed runtime environment (i.e. platform) and the subsequent installation of an intended application, possibly spread among multiple VMs, can then be serialized, as needed, and orchestrated. The ability to manage an entire set of VMs as one entity makes transitioning from IaaS to PaaS and then SaaS possible. In other words, a service deployment to a private cloud in VMM can start with first creating a set of VMs representing the application architecture (IaaS), followed by configuring the runtime (PaaS), and finish with installing and instantiating an intended application instance (SaaS). This service-based deployment is essentially a SaaS offering, which is the ultimate goal of cloud computing and what a cloud service consumer wants and expects.
With the understanding of a private cloud, a service, and their relationship in mind, the strategy is to delegate authority to a private cloud, i.e. the container, which will then automatically apply to those services deployed to the cloud.
In the VMs and Services workspace of the VMM admin console, create a private cloud (container) as shown on the left, and set the constraints and access control as intended. The defense-in-depth strategy on security, resource consumption, and manageability in general is integrated throughout VMM. A constraint can be set on a cloud, a service, a template, a user role, etc., i.e. at various logical layers, based on needs.
The security tools for delegating authority include Run As accounts and user role profiles in the Settings workspace of the VMM admin console.
There are scenarios where an authorized user or service may need an elevated right. Rather than giving out the credentials of a privileged account, a VMM admin can create a Run As account with the privileged account's credentials and offer the Run As account instead. A Run As account is similar to a symbolic name linked to a protected account, eliminating the need to share out the credentials of the protected account. The following is a sample.
A user role profile in the VMM security model is in essence a set of policies on who can do/consume what, when, and how much. A user with an assigned user role is subjected to those policies upon being authenticated by VMM. This model enables a VMM admin to map a business or functional role to a set of security policies. The logical steps to create a user role include:
This is basically to identify the functional requirements of a business role relevant to access control and constraints on a target cloud. Notice the considerations here are from a service provider's viewpoint, while the customers/consumers are those who own/develop/deploy/manage an examined service deployed to a target cloud.
There are four user role profiles as shown below left presented in VMM Settings workspace when creating a user role. Map the business functions of a role into an intended user role profile. Considering a typical service deployment, for example, there are business functional roles as shown below right.
Architecture Owner defines the solution architecture/infrastructure and needs the access to all including running instances and everything above and below fabric relevant to a service, hence this role is with a delegated administrator user role profile.
Service Owner owns a deployment and cares about keeping a service up and running to fulfill an SLA, and not so much about what is technically happening under the hood. A tenant administrator user role profile fits well.
Application Admin supports the service by performing basic support and maintenance routines, and escalates issues to Service Owner as needed. A self-service user role profile with proper scope of actions will work.
Here is a set of sample settings of a user role.
Notice that in VMM the delegation of authority can be scoped only at a private cloud level which then implicitly applies to a service deployed to the cloud.
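Creating and scoping a user role can also be scripted with the VMM cmdlets rather than the Settings workspace wizard; the role name, AD group, and cloud below are illustrative:

```powershell
# Create a self-service user role scoped to a private cloud (VMM 2012 SP1).
$cloud = Get-SCCloud -Name "StockTrader"
$role  = New-SCUserRole -Name "App Admins" -UserRoleProfile SelfServiceUser

# Add members and scope the role to the cloud; the cloud's constraints
# then implicitly apply to any service the role's members deploy there.
Set-SCUserRole -UserRole $role -AddMember "CONTOSO\AppAdmins" -AddScope $cloud
```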
In VMM 2012 SP1, a service or a VM is to be deployed with a service template or a VM template, respectively. A template is a cookie cutter with definitions, processes, configurations, and operations of deploying a resource. Deployment based on a template can provide consistency and predictability on runtime independent configurations like hardware settings, added server roles and features, application installation procedures, upgrade domains, scalability, etc.
The above left shows a service template being configured for deployment with the service template designer, and the above right depicts the user experience of specifying a destination for the deployment.
Upon successfully deploying a service, a VMM admin can log in to App Controller as shown below left, or to the VMM admin console as shown below right, with an intended user role and verify the delegation of authority. In this case, a VMM admin will need to assign herself the intended role, and in the log-in process there will then be an opportunity to specify an intended user role.
For enterprise IT, private cloud is becoming the next big thing to build upon virtualization. Review any technical media and you will find private cloud mentioned and discussed over and over again. While many may argue that highly virtualized computing is a private cloud, they are fundamentally different. One key idea of private cloud is a service-based deployment, as opposed to virtualization, which is a virtual machine-focused roll-out. The significance of private cloud in many aspects of IT will become immediately apparent once you have a chance to build and test one. I highly encourage you to take the opportunity to download the listed trial software, practice, and become familiar with the basic administration of Windows Server 2008 R2 SP1 and System Center Virtual Machine Manager 2012 to best prepare yourself. The technologies are already there, and the opportunity has arrived for you to become the next private cloud expert.
Yung Chou's Slides (PDF) and other shared resources
Must-read for IT pros:
Free eBooks, Trainings, and Downloads:
Essential Certifications for Private Cloud:
We’re here for Part 7 of our TechNet Radio Building a Private Cloud with System Center 2012 SP1 series, and in today’s episode Keith Mayer and Yung Chou show us how to create and delegate private clouds in Virtual Machine Manager. Tune in for another demo-heavy session and see how you can create private clouds and assign host groups, logical networks, storage and load balancers, as well as how to then create user roles, profiles and resources.
Back for Part 6, Keith Mayer and Yung Chou show us how to configure compute fabric in System Center 2012 SP1 Virtual Machine Manager. In this episode they show us the process of adding and configuring hosts and clusters like Windows Server, Citrix XenServer and VMware vCenter into VMM as well as how to manage host resources.
The idea of copying a VM deployed on premises to Windows Azure is a significant one. Not only can porting a VM from one datacenter facility to another be a technical challenge, but the ability to employ a cloud service provider's off-premises facilities like Windows Azure as an extension of on-premises deployments introduces new scenarios with exciting business opportunities. The VM portability and copying between an on-premises private cloud and an off-premises facility of a cloud service provider prove the technical framework and set a new standard for enterprise computing. This is a hybrid cloud in the making.
The concept is quite simple. A VM stored on premises is to be copied to an off-premises facility, Windows Azure offered by Microsoft in this case. Once copied, the VM will run transparently in Windows Azure. The following schematic presents a conceptual model showing that a VM stored and managed by System Center Virtual Machine Manager 2012 SP1 can be copied to a designated Windows Azure cloud service with Infrastructure as a Service, or IaaS. Similarly, a VM running in Windows Azure can also be copied down to an on-premises setting.
This model in essence realizes employing an off-premises facility as a site with the ability to realistically replicate on-premises workloads for backup and restore, DR, development, test, QA, etc. without the need to acquire and manage hardware, which dramatically simplifies the requirements and processes. The potential of this model to reduce TCO and accelerate ROI when implementing, for example, a backup-and-restore solution for a large deployment can be very significant.
To copy a VM from an SCVMM 2012 SP1-based private cloud to Windows Azure, some of the operations, including the Copy VM operation itself, are carried out with System Center 2012 SP1 App Controller. An authorized user performing the Copy VM operation must first establish connections in App Controller with (1) the SCVMM 2012 SP1 server which runs the private cloud where a target VM lives, and (2) an intended Windows Azure subscription, such that the on-premises private clouds and VMs managed by SCVMM 2012 SP1 and the services and VMs deployed in Windows Azure under the subscription are both presented and operable from a single pane of glass, i.e. the App Controller UI. The following presents a user's experience in examining resources deployed to on-premises SCVMM-based private clouds and off-premises Windows Azure cloud services associated with a particular connected subscription.
In SCVMM Administration Console, the private cloud which a target VM is deployed to must be configured with a stored-VM path in the cloud properties.
A VM to be copied must first be “stored,” which puts the VM in a saved state and exports it to the location where the stored-VM path points.
The STORE process saves the current state of the VM and exports it to the location which the Stored VM path points to, as shown below:
The Copy VM operation involves a few tasks and is carried out with App Controller. In the process, a designated cloud service in Windows Azure is either identified or created, and an associated Windows Azure storage account provides the storage space. Upon finishing configuring the cloud service, a user then deploys the service, which copies (i.e. uploads) the VM to the storage account, followed by deploying the VM to the service and bringing the VM to a running state.
The above diagram highlights the requirements and key operations to copy a VM with App Controller. Compared with what is happening under the hood, the user operations are drastically simple and elegantly streamlined. In App Controller, first right-click to STORE a target VM. As the operation completes and the state of the VM changes to Stored, right-click the stored VM and COPY it. This kicks off a process to associate the VM with a designated service and a specified storage account in Windows Azure. The VM upload happens when the cloud service is deployed from App Controller; this may take a few minutes or longer since one or more multi-GB VHD files are uploaded to the Windows Azure storage account.
Upon having deployed the designated Windows Azure cloud service, the process will then start the copied VM and bring it to a running state as highlighted below in Windows Azure management portal of the associated Windows Azure subscription.
And in App Controller, an authorized user should now see the copied VM listed out with the same state as a resource of the associated Windows Azure subscription.
Employing a 3rd party's off-premises facilities as an extension of on-premises deployments is an emerging computing model to further advance a private cloud into a hybrid one. The essential building block of a private cloud is Windows Server 2012 with Hyper-V, while the private cloud fabric is constructed and managed by System Center 2012 SP1. Now, with the introduction of IaaS in Windows Azure, IT can employ Windows Azure as an off-premises site and an extension, using App Controller as a single pane of glass to manage resources regardless of whether they are deployed on premises or off premises.
Among the members of the System Center 2012 release, App Controller seems to be getting more attention than the others in the suite, probably because App Controller directly answers the need for a single pane of glass to manage both public and private clouds.
A single pane of glass means seamless integration of multiple components, aggregation of information from multiple sources, fewer passwords to manage, less training needed, fewer helpdesk calls, more user productivity, higher satisfaction, and on and on. The long-term impact upon ROI and user satisfaction of providing a mechanism to manage resources deployed on premises and off premises with an integrated view can be very significant.
Here, in this last article of this 5-part series on VMM 2012 as listed below, I would like to offer a quick overview of App Controller, an essential add-on of VMM 2012.
Here I want to encourage you to download System Center 2012 trials available from this download Page, practice and experiment, get a head start in becoming the next private cloud expert in your organization.
A View of All
For public cloud, private cloud, and something in between, App Controller has a lot to offer to both a cloud administrator and a self-service user. App Controller is an add-on of VMM 2012 and a web-based interface configured as a virtual directory in IIS. A connection between App Controller and applications deployed to Windows Azure Platform in public cloud requires internet connectivity, certificates, and a Windows Azure subscription ID and credentials. To connect to a private cloud, a self-service user logs in to the associated VMM 2012 server with AD credentials. The access control is a role-based model using Windows Authorization Manager, i.e. AzMan, so what a self-service user can see or do is all trimmed and predefined.
The following shows App Controller connecting with two private clouds (PetShop and StockTrader) deployed by VMM 2012 and two subscriptions (Beta Test and Yung Chou's production account) of Windows Azure Platform in public cloud. In this setting, with App Controller I was able to deploy and manage StockTrader as a private cloud in VMM 2012 and at the same time publish and administer Windows Azure applications in public cloud, both over secure channels.
In addition to the ability to connect to a private cloud and a public cloud at the same time, another distinct feature of App Controller is enabling an authorized user to deploy a service to a private cloud in VMM 2012 without revealing the underlying private cloud fabric. Technically, such a complex infrastructure could easily have been presented with convoluted processes and confusing settings. Instead, a UI gracefully designed with a keep-it-simple approach offers a quite remarkable user experience.
Notice that in the App Controller UI, fabric is not visible even when logged on with VMM admin privileges. This allows a cloud administrator to enable service owners to deploy applications to private clouds based on their needs in a self-servicing fashion, while still retaining total control of how the infrastructure, abstracted by the fabric, is configured and managed. This is a great story.
Service Upgrade with App Controller
Personally, I find the upgrade of a service with App Controller most exciting. To upgrade a service running in a private cloud deployed by VMM 2012, a self-service user can simply apply a new service template to an intended instance of the service. Operationally, it can be carried out in a few mouse clicks. Depending on the Upgrade Domain and Fault Domain of the service (similar to those in Windows Azure Platform) and what kind of updates are made to the service, there may or may not be a service outage. Just to highlight the process, the following captures the App Controller screen for a self-service user to confirm upgrading a running instance of the StockTrader service from release 2011.11 to 2011.11.24.
Notice that in VMM 2012, the self-service model for deploying a private cloud is via the VMM 2012 admin console or App Controller. The former is a Windows application, while the latter is a web-based interface. There is also a self-service portal one can install for VM-based deployment only.
VMM 2012 is the beginning of a new era. Infrastructure and deployment can no longer be excuses for IT to prolong, delay, and procrastinate. The expectation now is not what or if, but how fast IT can deliver.
Although the establishments already deployed may not be reconfigured, reengineered, or replaced as quickly as people would like to see, customers’ expectations continue to rise. The mindset of IT pros must change from “how I may not be able to deliver it” to “what are a customer’s needs and how soon will IT make it happen”.
A few years ago, many thought virtualization would be relevant only to enterprise IT, while today virtualization has become a core skill set and is no longer a specialty. Those who still believe private cloud is remote and not applicable may wake up tomorrow and realize everything is moving and changing toward cloud much faster and in a larger scope than anticipated. Private cloud is a highly technical subject, and there is simply, in my view, no shortcut to mastering it. Investing time in learning VMM 2012 and App Controller the old-fashioned way, by getting my hands dirty, is what I have done and will continue doing. Start today. Start now. Start by building your VMM test lab. Study, practice, and deploy your own cloud. You will then be on the road to becoming the next private cloud expert in your organization.
[To Part 1, 2, 3, 4]
The Windows Server 2012 Storage Spaces subsystem virtualizes storage by abstracting multiple physical disks into a logical construct with specified capacity. The process groups selected physical disks into a container, the so-called storage pool, such that the total capacity collectively presented by those physical disks can appear and become manageable as a single, seemingly continuous space. A storage administrator then creates a virtual disk based on a storage pool, configures a storage layout (essentially a RAID level), and exposes the storage of the virtual disk as a drive letter or a mapped folder in Windows Explorer.
With multiple disks presented collectively as one logical entity, i.e. a storage pool, Windows Server 2012 can act as a RAID controller, configuring a virtual disk based on the storage pool as software RAID. However, the scalability, resiliency, and optimization that the Storage Spaces subsystem delivers are much more than what software RAID offers. Windows Server 2012 therefore presents Storage Spaces as a set of natively supported storage virtualization and optimization capabilities, and not just software RAID per se. I am using the term software RAID here to convey a well-known concept, not as an equivalent to the capabilities of the Storage Spaces subsystem.
This is an abstraction that presents the specified storage capacity of a group of physical disks as if the capacity came from one logical entity called a storage pool. For instance, by grouping four physical disks, each with 500 GB of raw space, into a storage pool, Storage Spaces enables a system administrator to configure the 2 TB capacity (collectively from the four individual physical disks) as one logical, seemingly continuous store without the need to directly manage individual drives. Storage Spaces shields the physical characteristics and presents selected storage capacity as pools, in which a virtual disk can be created with a specified storage layout (i.e. RAID level) and provisioning scheme, and exposed to Windows Explorer as a drive or a mapped folder for consumption. The following schematic illustrates the concept.
A storage pool can consist of heterogeneous physical disks. Notice that a physical disk in the context of Windows Server 2012 Storage Spaces is simply raw storage from a variety of drive types, including USB, SATA, and SAS drives, as well as an attached VHD/VHDX file, as shown below. With a storage pool, Windows Server 2012 presents the included physical disks as one logical entity. Allocating the capacity of a storage pool means first creating a virtual disk based on the pool, then creating a volume and mapping it to a drive letter or an empty folder. With the mapping in place, the volume based on a virtual disk of a storage pool appears and works just like a conventional hard drive or folder in Windows Explorer.
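To see which of those heterogeneous physical disks are actually eligible for pooling, a quick PowerShell check can help. The following is a minimal sketch, assuming the Storage module cmdlets shipped with Windows Server 2012 are available on the server:

```powershell
# List physical disks that can be added to a storage pool (CanPool = $true),
# showing the bus type (USB, SATA, SAS, etc.) and raw size of each.
Get-PhysicalDisk -CanPool $true |
    Select-Object FriendlyName, BusType, Size |
    Format-Table -AutoSize
```

Disks already claimed by a pool, or holding existing partitions, report CanPool as false and will not appear in this list.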
The process to create a storage pool is straightforward with the UI at Server Manager/File and Storage Services/Volumes/Storage Pools. Overall: first group all intended physical disks into a storage pool; create a virtual disk based on the pool; then create a volume based on the virtual disk and map the volume to a drive letter or an empty folder. At this point, the mapped drive letter or folder becomes available in Windows Explorer. By organizing physical disks into a storage pool, one can simply add disks as needed to expand the physical capacity of the pool. A typical routine to configure a storage pool as software RAID in Server Manager includes:
In step 4, two storage provisioning schemes are available. As shown below, Thin provisioning of a virtual disk optimizes the utilization of available storage in a storage pool by over-subscribing capacity with just-in-time allocation. In other words, the pool capacity used by a Thin-provisioned virtual disk corresponds only to the size of the files on the virtual disk, not the defined size of the virtual disk. While Thin provisioning offers flexibility and optimization, the other provisioning scheme, Fixed, acquires the specified capacity at disk creation time for best performance.
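The same choice between the two schemes can be expressed in PowerShell with New-VirtualDisk. This is a hedged sketch, not a definitive recipe: the pool name Pool1 and the disk names DataThin and DataFixed are assumptions for illustration.

```powershell
# Thin provisioning: the 6 TB size is a promise; pool capacity is
# allocated just-in-time as data is actually written.
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "DataThin" `
    -Size 6TB -ProvisioningType Thin -ResiliencySettingName Parity

# Fixed provisioning: the full capacity is carved out of the pool
# at creation time for best performance.
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "DataFixed" `
    -Size 500GB -ProvisioningType Fixed -ResiliencySettingName Mirror
```

Note that a Thin-provisioned disk may exceed the pool's current physical capacity, while a Fixed-provisioned disk must fit within what the pool can actually supply at creation time.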
While creating a virtual disk based on a storage pool from Server Manager/File and Storage Services/Volumes/Storage Pools, there are three levels of software RAID available, as illustrated below. These RAID settings are presented as options of the Windows Server 2012 Storage Layout, including:
A storage administrator can configure storage virtualization, namely storage pools, virtual disks, etc., of local and remote servers with the Server Manager/Volumes/Storage Pools interface, PowerShell, or even Disk Manager. The following is a screen capture of a configured storage pool with a 6 TB virtual disk at RAID 5 level mounted as the S drive. And in case some wonder: no, I did not have 6 TB of storage capacity; Thin provisioning over-subscribed what the physical disks were actually offering.
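The whole routine, from grouping disks into a pool to surfacing an S drive in Windows Explorer, can also be scripted end to end. The following is a sketch under stated assumptions: the pool name Pool1, the virtual disk name Data, the drive letter S, and the subsystem name pattern are all placeholders to adapt to the target server.

```powershell
# Collect the disks eligible for pooling on this server.
$disks = Get-PhysicalDisk -CanPool $true

# Group them into a storage pool; the subsystem friendly name is
# typically "Storage Spaces on <server>", hence the wildcard.
New-StoragePool -FriendlyName "Pool1" `
    -StorageSubSystemFriendlyName "*Storage Spaces*" `
    -PhysicalDisks $disks

# A Thin-provisioned 6 TB virtual disk with a Parity (RAID 5-like) layout,
# over-subscribing the pool's physical capacity.
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Data" `
    -Size 6TB -ProvisioningType Thin -ResiliencySettingName Parity

# Initialize the disk, partition it as drive S, and format it so the
# space appears in Windows Explorer like a conventional drive.
Get-VirtualDisk -FriendlyName "Data" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter S -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data"
```

Because every step is a cmdlet, the same sequence works against remote servers through PowerShell remoting, which is what makes the Server Manager experience and the scripted experience interchangeable.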
Call to Action
We’re here for Part 5 of the Building a Private Cloud with System Center 2012 SP1 series, and in today’s episode Keith Mayer and Yung Chou show us how to configure network fabric in Virtual Machine Manager (VMM). Tune in for this demo-heavy session as they show us how to architect our physical networks and bring them into VMM, helping us define our logical and VM networks and then create and assign our switches and gateways.