This is a follow-up post and a continuation of the discussion of desktop virtualization and Remote Desktop Services (RDS) relevant to Windows 7 and Windows Server 2008 R2 (WS2008R2). I highly recommend that those who are not familiar with RDS take a moment to review the architecture and learn what role RDWA, RDG, RDSH, RDVH, and RDCB each plays in serving a remote access request. Doing so will facilitate one's understanding of the integration between RDS and VDI, and sets the stage for the next level of discussion in my upcoming post on the nuts and bolts of building a VDI solution. I wrote this article with the following logical flow in mind:
What It Is
Microsoft Virtual Desktop Infrastructure (VDI) is a centralized desktop delivery solution. The concept of VDI is to store and run desktop workloads, including a Windows client operating system, applications, and data, in a server-based virtual machine (VM) in a data center, and to allow a user to interact with the desktop presented on a user device via Remote Desktop Protocol (RDP). Notice VDI is part of an enterprise's cohesive, holistic virtualization strategy across the IT infrastructure to support Microsoft's vision of Dynamic IT. VDI is not an isolated architecture, but one of the many technologies available to optimize enterprise desktops.
A noticeable component in the Remote Desktop Services (RDS) of WS2008R2 is the availability of Remote Desktop Connection Broker (RDCB). RDCB is a native VDI connection broker that provides a unified experience for access to VDI as well as traditional session-based remote desktops. With RDCB, virtual desktops are now delivered much like RemoteApp programs. For example, a user will access http://rds-all.contoso.corp/rdweb and, once authenticated, be presented with a webpage listing authorized applications and desktops, as shown below.
Here, three Office 2007 applications are published as RemoteApp programs, which work very much the same as they do in Windows Server 2008. In Windows Server 2008 R2, however, the RemoteApp programs shown at this consistent URL can be composed from multiple sources. The RemoteApp programs shown here are not necessarily installed on the same Remote Desktop Session Host (RDSH) or Terminal Server. They can come from multiple RDSHs and Terminal Servers, yet be composed and presented with the same URL. Further, the presence of a RemoteApp program is based on the access control list of the published application in RDSH. By default, all authenticated users have access to published RemoteApp programs.
The icon, My Desktop, appears only for those who are assigned a personal virtual desktop. The assignment can be done in RDCB or on the User object in Active Directory. When a user clicks the My Desktop icon, a virtual desktop is delivered to the user's device once the user is authenticated. The following screen capture shows Word 2007 accessed as a RemoteApp program and a virtual desktop delivered via VDI to a user on a non-managed Windows 7 client.
The icon, Contoso Desktop, is for accessing a virtual desktop running on a VM dynamically picked from a VM pool defined in RDCB. Notice that once a VM pool is defined, the icon to access a VM in the pool shows up on the RDS webpage for all authenticated users, regardless of whether a user has access to the pool. Both the display name of the page and the display name of the icon to access a VM pool can be easily customized in RDCB; here "Contoso Wonder LAN" and "Contoso Desktop" are both customized display names. Further information on the RDS architecture and how RDCB plays a central role in a VDI solution is available in "Remote Desktop Services (RDS) Architecture Explained."
RemoteApp and Desktop Connection
A new feature in WS2008R2 worth mentioning here is RemoteApp and Desktop Connection, which provides the ability to access RemoteApp programs, remote desktops, and virtual desktops from the Start menu of a Windows 7 PC. In Windows 7, a user can go to Control Panel to configure it with a few mouse clicks in a friendly wizard-driven process. The URL of an intended RDS webpage and the credentials of an intended user are needed to complete the process. When RemoteApp and Desktop Connection accesses a target RDS webpage on a user's behalf, the user is prompted for credentials. The screen capture on the right shows the Windows 7 Start menu integrated with RDS resources published on the Contoso Wonder LAN page shown earlier. If the user deletes the settings configured in RemoteApp and Desktop Connection, the Contoso Wonder LAN entry and its content are removed accordingly.
To facilitate RDS/VDI deployment, an enterprise administrator can create and distribute a client configuration (.wcx) file to a user to facilitate configuring RemoteApp and Desktop Connection. Another way is to distribute a script that runs the client configuration file silently, so that RemoteApp and Desktop Connection is set up automatically when a user logs on to their account on a Windows 7 computer. The automation can be easily done, minimizes operator intervention, and provides a great user experience.
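As a rough illustration, a logon script along the following lines could run the .wcx file silently. The share path is hypothetical, and the tsworkspace WorkspaceSilentSetup entry point is an assumption to verify in your environment rather than a guarantee.

# Hypothetical logon-script sketch: silently configure RemoteApp and Desktop
# Connection from a distributed .wcx file (the path is illustrative).
$wcx = '\\contoso.corp\netlogon\ContosoWonderLAN.wcx'
rundll32.exe tsworkspace,WorkspaceSilentSetup $wcx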
With RemoteApp and Desktop Connection, a Windows 7 user can access RemoteApp programs and virtual desktops directly from the Start menu without the need to specify the RDS URL. This minimizes user training and offers a consistent experience for using Windows applications.
How It Works
With VDI, a virtual desktop is isolated from the client's device and runs in a VM maintained in a data center. Here the device can be a desktop, laptop, or thin client. A VDI user interacts with one's virtual desktop through RDP, which provides a rich desktop experience. Similar to session-based remote desktops (formerly known as Terminal Services), VDI provides a server session with a full-fidelity desktop environment that is virtualized within a server-based hypervisor. The premise of VDI is that all VDI users are running virtual desktops on VMs. Key technical components making VDI a reality include:
In a VDI deployment, there are two models: (1) a static or persistent virtual desktop and (2) a dynamic or non-persistent one. In static mode, there is a one-to-one mapping of VMs to users. Each user is assigned a designated VM. Since VMs are commonly stored on a Storage Area Network (SAN) and execute on a server, a larger number of users will likely lead to significant SAN requirements.
In a dynamic architecture, on the other hand, only one master image of the desktop is stored. All user personalization, profiles, applications, etc. are stored separately from the desktop. When a user requests a desktop, a VM cloned from the master image is combined with the user's personal data and applications, delivered dynamically to the user device based on roaming profiles and App-V. This delivers a personalized desktop experience by dynamically provisioning a base image, and it simplifies overall VM management by reducing the number of desktop images maintained.
Both RDS and VDI are core components of desktop virtualization, and they satisfy specific computing requirements and scenarios with deployment readiness and flexibility. For a remote task worker who needs to access a specific application to carry out a well-defined task, like entering data or reporting status for time reporting, inventory updates, or incident reports, RemoteApp may be sufficient. A knowledge worker, on the other hand, who performs complex or unstructured routines, like analyzing data, architecting a solution, designing a product, writing code, or troubleshooting a system, will likely require full access to a desktop to assure productivity, and deploying a virtual desktop is one solution.
Notice that VDI, while flexible, does require more server hardware resources than the traditional session-based remote desktop approach. In general, VDI requires an upfront investment in server and storage hardware to store and execute all needed VMs. To ensure users are able to access virtual desktops, the network supporting VDI needs to be highly available: for a user, no network connectivity means no virtual desktop. Generally speaking, the network bandwidth requirement to support VDI is also expected to be higher than that of Terminal Services. Virtual machine management software is also essential to manage enterprise virtual desktops, i.e. VMs, running on hypervisor hosts. Regarding user experience, one should not expect a remote desktop or a virtual desktop to perform exactly as well as a locally installed desktop. Audio, video, and USB performance on a remote desktop may not be as rich as on hardware running on or attached to a user's device directly. The fact is a rich client will always provide a superior user experience to that delivered with VDI. Overall, considerations for a Microsoft VDI solution should include, but not be limited to:
VDI essentially delivers a desktop on demand to a user device via a network connection. This is different from running a conventional desktop machine, where an OEM license is bound to hardware and cannot be dynamically assigned the way VDI assigns desktops. Traditional licensing has become insufficient to correctly reflect the number of licenses consumed in a desktop deployment delivered with VDI.
To accommodate new deployment scenarios, Microsoft has introduced two new offerings for VDI: Microsoft Virtual Desktop Infrastructure Standard Suite (VDI Standard Suite) and Microsoft Virtual Desktop Infrastructure Premium Suite (VDI Premium Suite). Both the VDI Standard Suite and the VDI Premium Suite are licensed per client device that accesses the VDI environment, and thereby allow for flexibility of server infrastructure design and growth. Additional information on Remote Desktop Services Licensing is available.
RDS vs. VDI
Like many solutions, there are pros and cons to employing RDS or VDI, as shown below. In my view, just like the debates on "thick client vs. thin client" and "in the cloud vs. on premises," I have no doubt there will be a mix of the two, RDS and VDI, in enterprise IT for the foreseeable future. What we must recognize is that business requirements should dictate the solution chosen.
Since virtual desktops delivered by VDI are VMs running in a data center, enterprise IT can realize all the benefits of centralized desktop management. Strategically, VDI enables enterprise IT to
VDI is not for every user but provides deployment readiness and flexibility for specific scenarios including:
Best Practices for VDI
Segment desktop users and categorize user requirements to better understand user scenarios. Assess who can benefit from centralized desktops, and with what kind of business benefits.
Centralizing desktops can be implemented using RDS, VDI, or a combination of the two, and user requirements should determine which is the best fit.
Separate applications from the desktop image, dynamically provision desktop applications per user, and minimize the number of desktop images. One solution is to employ Microsoft App-V/TS, or App-V for Terminal Services, with a VDI solution. Further discussion of App-V/TS will come in an upcoming blog post and is beyond the scope of this article.
We must be aware that running virtual desktops does not eliminate licensing or IT management costs. It may be a challenge to prove the TCO reduction with an emerging technology like VDI, which uplifts IT's capabilities to a new dimension by fundamentally changing how desktops and applications can be deployed and managed like a service using virtualization.
"Service" can sometimes be a very scary term. For decades, enterprise IT has been delivering services to its customers. Today, we are still learning and debating how to quantify and put a business value on IT services. VDI, in my view, is a service, and I can almost hear "everything as a service" now. To ensure success and realize the business benefits of a VDI solution, a baseline is integral and should be established first. As discussed earlier, VDI works well for some scenarios, and there are times VDI may not be the most cost-effective option; nevertheless it is a solution with a high predictability of success. The key is to be clear on what a VDI solution is trying to achieve and, just as critically, to identify what to measure, where to draw the line, and in which direction the organization is heading. Although this sounds like common sense and project management 101, in a VDI project the basics are critical. And here I predict:
I have already seen VDI and other virtualization technologies like App-V and RDS bring new opportunities and challenges to many of us. Going forward, I believe VDI will continue to have an impact on how you, I, and organizations perceive IT and carry out the business of IT. As cliché as it sounds, this is an IT transformation from an infrastructure-focused deployment to physical devices into a dynamic and user-centric approach with virtual desktops. Perhaps this is what I am really saying:
In Windows Server 2008 R2 (WS2008R2), Terminal Services (TS) has been expanded and renamed Remote Desktop Services (RDS). RDS is the backbone of Microsoft's VDI solutions. In Windows Server 2012, RDS is further enhanced with a scenario-based configuration wizard, yet the concept and architecture remain very much the same as in WS2008R2. The new and enhanced architecture takes advantage of virtualization and makes remote access a much more flexible solution with new deployment scenarios. To realize the capabilities of RDS, it is essential to understand the functions of key architectural components and how they complement one another to process an RDS request. There are many new terms and acronyms to get familiar with in the context of RDS. For the remainder of this post, notice that RDS implies the server platform of WS2008R2 and later, while TS implies WS2008.
There are five main architectural components in RDS, as shown, and all require an RDS licensing server. Each component includes a set of features designed to achieve particular functions. Together, the five form a framework for accessing Terminal Services applications, remote desktops, and virtual desktops, all with WS2008R2 capabilities. Essentially, WS2008R2 offers a set of building blocks with essential functions for constructing an enterprise remote access infrastructure.
To start, a user accesses an RDS webpage by specifying a URL where RDS resources are published. This interface, provided by Remote Desktop Web Access (RDWA) and configured with local IIS and SSL, is the web access point to RemoteApp and VDI. The URL is consistent regardless of how resources are organized, composed, and published from multiple RDS session hosts behind the scenes. By default, RDS publishes resources at https://the-FQDN-of-a-RDWA-server/rdweb, and this URL is the only information a system administrator needs to provide to a user for accessing authorized resources via RDS. A user needs to be authenticated with AD credentials when accessing the URL, and the RemoteApp programs presented at this URL are trimmed by access control lists. Namely, an authenticated user will see and be able to access only authorized RemoteApp programs.
Remote Desktop Gateway (RDG) is optional and functions very much the same as its counterpart in TS. An RDG is placed at the edge of a corporate network to filter incoming RDS requests by referencing criteria defined in a designated Network Policy Server (NPS). With a server certificate, RDG offers secure remote access to the RDS infrastructure. As far as a system administrator is concerned, RDG is the boundary of an RDS network. There are two policies in NPS relevant to an associated RDG:
In RDS, applications are installed and published on a Remote Desktop Session Host (RDSH), similar to a TS Session Host, or simply a Terminal Server, in a TS solution. An RDSH loads applications, crunches numbers, and produces results. It is our trusted and beloved workhorse in an RDS solution. Digital signing can be easily enabled on an RDSH with a certificate. Multiple RDSHs can be deployed along with a load balancing technology, which requires every RDSH in a load-balancing group to be identically configured with the same applications.
A noticeable enhancement in RDSH (as compared with the TS Session Host) is the ability to trim the presence of a published application based on the access control list (ACL) of the application. An authorized user will see, and hence have access to, only the published applications in whose ACLs the user is included. By default, the Everyone group is authorized in a published application's ACL, and all connected users have access to a published application.
Remote Desktop Virtualization Host (RDVH) is a new feature which serves requests for virtual desktops running in virtual machines, or VMs. An RDVH server is a Hyper-V based host, for instance a Windows Server with the Hyper-V server role enabled. When serving a VM-based request, an associated RDVH automatically starts the intended VM, if the VM is not already running. A user is always prompted for credentials when accessing a virtual desktop. However, an RDVH does not directly accept connection requests; it uses a designated RDSH as a "redirector" for serving VM-based requests. The pairing of an RDVH and its redirector is defined in Remote Desktop Connection Broker (RDCB) when adding an RDVH as a resource.
Remote Desktop Connection Broker (RDCB), an expansion of the Terminal Services Session Broker in TS, provides a unified experience for setting up user access to traditional TS applications and virtual machine (VM)-based virtual desktops. Here, a virtual desktop can be running in either a designated VM, or a VM dynamically picked based on load balancing from a defined VM pool. A system administrator will use the RDCB console, called Remote Desktop Connection Manager, to include RDSHs, TS Servers, and RDVHs such that those applications published by the RDSHs and TS Servers, and those VMs running in RDVHs can be later composed and presented to users with a consistent URL by RDWA. And with this consistent URL, authenticated users can access authorized RemoteApp programs and virtual desktops.
A Remote Desktop (RD) client gets connection information from the RDWA server in an RDS solution. If an RD client is outside the corporate network, the client connects through an RDG. If an RD client is internal, the client can connect directly to an intended RDSH or RDVH once RDCB provides the connection information. In both cases, RDCB plays a central role in making sure a client gets connected to the correct resource. With certificates, a system administrator can configure digital signing and single sign-on among RDS components to provide a great user experience with high security.
Conceptually, RDCB is the chief intelligence and operations officer of an RDS solution and knows which is where, whom to talk to, and what to do with an RDS request. Before a logical connection can be established between a client and a target RDSH or RDVH, RDCB acts as a go-between, passing and forwarding pertinent information to and from the associated parties when serving an RDS request. From a 50,000-foot view, a remote client uses RDWA/RDG to obtain access to a target RDSH or RDVH, while RDCB connects the client to a session on the target RDSH, or to an intended VM configured on a target RDVH. Above is an RDS architecture poster with a visual presentation of how it all flows together. http://aka.ms/free has a number of free e-books and this poster with additional information on WS2008R2 Active Directory, RDS, and other components.
The configuration in WS2008 is a bit challenging with many details easily overlooked. Windows Server 2012 has greatly improved the user experience by facilitating the configuration processes with a scenario-based wizard. Stay tuned and I will further discuss this in an upcoming blog post series.
Recommended additional reading on RDS/VDI/App-V, cloud essentials, and private cloud
The Windows Server 2012 Storage Spaces subsystem virtualizes storage by abstracting multiple physical disks into a logical construct with specified capacity. The process is to group selected physical disks into a container, the so-called storage pool, such that the total capacity collectively presented by those associated physical disks can appear and become manageable as a single and seemingly continuous space. Subsequently, a storage administrator creates a virtual disk based on a storage pool, configures a storage layout which is essentially a RAID level, and exposes the storage of the virtual disk as a drive letter or a mapped folder in Windows Explorer.
With multiple disks presented collectively as one logical entity, i.e. a storage pool, Windows Server 2012 can act as a RAID controller, configuring a virtual disk based on the storage pool as software RAID. However, the scalability, resiliency, and optimization that the Storage Spaces subsystem delivers are much more than what a software RAID offers. Therefore, Windows Server 2012 presents the Storage Spaces subsystem as a set of natively supported storage virtualization and optimization capabilities and not just software RAID per se. I am using the term software RAID here to convey a well-known concept, not as an equivalent set of capabilities to those of the Storage Spaces subsystem.
This is an abstraction to present specified storage capacity of a group of physical disks as if the capacity is from one logical entity called a storage pool. For instance, by grouping four physical disks, each with 500 GB raw space, into a storage pool, the Storage Spaces subsystem enables a system administrator to configure the 2 TB capacity (collectively from four individual physical disks) as one logical, seemingly continuous storage without the need to directly manage individual drives. Storage Spaces shields the physical characteristics and presents selected storage capacity as pools, in which a virtual disk can be created with a specified storage layout (i.e. RAID level) and provisioning scheme, and exposed to Windows Explorer as a drive or a mapped folder for consumption. The following schematic illustrates the concept.
A storage pool can consist of heterogeneous physical disks. Notice that a physical drive in the context of Windows Server 2012 Storage Spaces is simply raw storage from a variety of drive types, including USB, SATA, and SAS drives, as well as an attached VHD/VHDX file, as shown below. With a storage pool, Windows Server 2012 presents the included physical drives as one logical entity. Allocating the capacity of a storage pool consists of first creating a virtual disk based on the storage pool, followed by creating a volume and mapping it to a drive letter or an empty folder. With the mapping, the volume based on a virtual disk of a storage pool appears and works just like a conventional hard drive or folder in Windows Explorer.
The process to create a storage pool is straightforward with the UI from Server Manager/File and Storage Services/Volumes/Storage Pools. Overall, first group all intended physical disks into a storage pool, create a virtual disk based on the storage pool, then create a volume based on the virtual disk and map the volume to a drive letter or an empty folder. At that point, the mapped drive letter or folder becomes available in Windows Explorer. By organizing physical disks into a storage pool, one can simply add disks as needed to expand the physical capacity of the pool. A typical routine to configure a storage pool as software RAID in Server Manager includes:
In step 4, two storage provisioning schemes are available. As shown below, Thin provisioning of a virtual disk optimizes the utilization of available storage in a storage pool by over-subscribing capacity with just-in-time allocation. In other words, the pool capacity used by a virtual disk with Thin provisioning is based only on the size of the files on the virtual disk, and not on the defined size of the virtual disk. While Thin provisioning offers flexibility and optimization, the other virtual disk provisioning scheme, Fixed, acquires the specified capacity at disk creation time for best performance.
While creating a virtual disk based on a storage pool from Server Manager/File and Storage Services/Volumes/Storage Pools, there are three levels of software RAID available, as illustrated below. These RAID settings are presented as options of the Windows Server 2012 Storage Layout, including:
A storage administrator can configure storage virtualization, namely storage pools, virtual disks, etc., on local and remote servers with the Server Manager/Volumes/Storage Pools interface, PowerShell, or even Disk Manager. The following is a screen capture of a configured storage pool with a 6 TB virtual disk at RAID 5 level mounted on the S drive. And in case some wonder: no, I did not have 6 TB of physical capacity; it was done with Thin provisioning to over-subscribe what the physical disks were actually offering.
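For reference, the same configuration can also be scripted end to end. The following is a minimal PowerShell sketch assuming the server has poolable disks attached; the pool, disk, and volume names are illustrative, and Parity is the Storage Layout corresponding to the RAID 5 level mentioned above (it needs at least three physical disks).

# Pool all available (poolable) physical disks into 'Pool01'.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName 'Pool01' -PhysicalDisks $disks `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName
# Create a 6 TB thin-provisioned virtual disk with a Parity (RAID 5) layout.
New-VirtualDisk -StoragePoolFriendlyName 'Pool01' -FriendlyName 'VDisk01' `
    -ResiliencySettingName Parity -ProvisioningType Thin -Size 6TB
# Initialize the disk, create a partition mounted on S:, and format the volume.
Get-VirtualDisk -FriendlyName 'VDisk01' | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter S -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel 'Data'
# Later, expand the pool simply by adding more poolable physical disks.
Add-PhysicalDisk -StoragePoolFriendlyName 'Pool01' -PhysicalDisks (Get-PhysicalDisk -CanPool $true)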
Call to Action
More hybrid cloud resources at http://aka.ms/all.
You want to download this one!
Exchange Server 2010 Architecture Poster
The steps I followed to add the sidebar to the Windows Server 2008 R2 Beta (R2Beta) desktop, as shown below, are very similar to those documented in Adding Vista Sidebar and Aero to Windows Server 2008 Desktop. There were, however, some changes needed.
In step 1, the document has Vista x64 code copied into Windows Server 2008. Here I copied Windows 7 x64 code into R2Beta. This should be obvious.
In step 5, running sidebar.exe in R2Beta did not bring up the sidebar. And unlike Windows 7 Beta, right-clicking the mouse on the R2Beta desktop does not offer a Gadgets option leading to the sidebar. To bring up the sidebar on the R2Beta desktop at this point, one can install a saved gadget file if it is readily available. If not, first go to the online Vista Gadget Gallery and download a gadget, not to install it, since that will fail with the error message below, but to save it to a local folder.
I simply downloaded a few gadgets and saved them in C:\Program Files\Windows Sidebar\Shared Gadgets. I then double-clicked the saved files to install the gadgets. This loaded the sidebar onto the desktop. Once the sidebar was up, I customized the installed gadgets just as in Vista.
There was still a loose end. After a reboot, the sidebar would not show up on the R2Beta desktop. I rebooted a few times and the sidebar simply would not appear. I realized that reloading a gadget would refresh the sidebar. So, using Task Scheduler, I set up a task, as shown on the right, to run at logon time to reload a gadget, which should bring up the sidebar every time I log on. It worked as expected.
My objective was simple: to get the sidebar to show up on the R2Beta desktop. I am not sure this is the optimal way to add the sidebar to R2Beta, and I would prefer not to run a task at logon time to bring up the sidebar automatically; nevertheless it gets the job done. If anyone out there knows a better way to do this, I would really love to hear it.
By this time, I assume we all have some clarity that virtualization is not cloud. There are indeed many significant differences between the two. A main departure is the approach to deploying apps. In the third article of the five-part series listed below, I would like to examine the service-based deployment introduced in VMM 2012 for building a private cloud.
VMM 2012 has the ability to carry out both traditional virtual machine (VM)-centric and emerging service-based deployments. The former is virtualization-focused and operated at a VM level, while the latter is a service-centric approach intended for private cloud deployment.
This article is intended for those with some experience administering a VMM 2008 R2 infrastructure. Notice that in cloud computing, "service" is a critical and must-understand concept which I have discussed elsewhere. And just to be clear, in the context of cloud computing, a "service" and an "application" mean the same thing, since in cloud everything delivered to a user is delivered as a service, for example SaaS, PaaS, and IaaS. Throughout this article, I use the terms service and application interchangeably.
In virtualization, deploying a server has conceptually become shipping/building and booting from a virtual hard disk (VHD) file. Those who would like to refresh their knowledge of virtualization are invited to review the 20-Part Webcast Series on Microsoft Virtualization Solutions.
Virtualization has brought many opportunities for IT to improve processes and operations. With system management software such as System Center Virtual Machine Manager 2008 R2 (VMM 2008 R2), we can deploy VMs and install an OS to a target environment with few or no operator interventions. Yet from an application point of view, with or without automation, the associated VMs are essentially deployed and configured individually. For instance, a multi-tier web application like the one shown above is typically deployed with a pre-determined number of VMs, followed by installing and configuring the application among the deployed VMs individually based on application requirements. Particularly when there is a back-end database involved, a system administrator typically must follow a particular sequence to first bring a target database server instance online by configuring specific login accounts with specific db roles, securing specific ports, and registering in AD before proceeding with subsequent deployment steps. These operator interventions are required largely due to the lack of a cost-effective, systematic, and automatic way to streamline and manage the concurrent and event-driven inter-VM dependencies which become relevant at various moments during an application deployment.
Even where a system management infrastructure is in place, like VMM 2008 R2 integrated with other System Center members, at an operational level VMs are largely managed and maintained individually in a VM-centric deployment model. And perhaps more significantly, in a VM-centric deployment it is too often labor-intensive, with relatively high TCO, to deploy a multi-tier application "on demand" (in other words, as a service), deploy it multiple times, and run multiple releases concurrently in the same IT environment, if it is technically feasible at all. Now in VMM 2012, the ability to deploy services on demand, deploy them multiple times, and run multiple releases concurrently in the same environment becomes noticeably straightforward and amazingly simple with a service-based deployment model.
In a VM-centric model, there is no effective way to address event-driven and inter-VM dependencies during a deployment, nor is there a concept of fabric, which is an essential abstraction of cloud computing. In VMM 2012, a service-based deployment means all the resources encompassing an application, i.e. the configurations, installations, instances, dependencies, etc., are deployed and managed as one entity with fabric. The integration of fabric in VMM 2012 is a key delivery and is clearly illustrated in the VMM 2012 admin console, as shown on the left. And the precondition for deploying services to a private cloud is all about first laying out the private cloud fabric.
To deploy a service, the process normally employs administrator and service accounts to carry out the tasks of installing and configuring the infrastructure and application on servers, networking, and storage based on application requirements. Here, servers collectively act as a compute engine to provide a target runtime environment for executing code. Networking interconnects all relevant application resources and peripherals to support all management and communication needs, while storage is where code and data actually reside and are maintained. In VMM 2012, the server, networking, and storage infrastructure components are collectively managed with a single concept as private cloud fabric.
There are three resource pools/nodes encompassing fabric: Servers, Networking, and Storage. Servers contain various types of servers including virtualization host groups, PXE, Update (i.e. WSUS), and other servers. Host groups are containers to logically group servers with virtualization hosting capabilities, and ultimately represent the physical boxes where VMs can be deployed, either with specific network settings or dynamically selected by VMM Intelligent Placement, as applicable, based on defined criteria. VMM 2012 can manage Hyper-V based, VMware, and other virtualization solutions. When adding a host into a host group, VMM 2012 installs an agent on the target host, which then becomes a managed resource of the fabric, as sketched below.
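To make this concrete, the following is a hedged VMM PowerShell sketch of bringing a host under management; the host name, host group, and Run As account are illustrative assumptions, not values from this environment.

# Hedged sketch: add a Hyper-V host to a host group in the VMM 2012 fabric.
$cred = Get-SCRunAsAccount -Name 'HostAdmin'      # assumed Run As account
$group = Get-SCVMHostGroup -Name 'Production'     # assumed host group
Add-SCVMHost -ComputerName 'hv01.contoso.corp' -VMHostGroup $group -Credential $cred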
A Library Server is a repository where the resources for deploying services and VMs are made available via network shares. As a Library Server is added into fabric, by specifying the network shares defined on the Library Server, file-based resources like VM templates, VHDs, ISO images, service templates, scripts, Server App-V packages, etc. become available as building blocks for composing VM and service templates. As various types of servers are brought into the Server pool, the coverage expands and capabilities increase, as if additional fibers are woven into the fabric.
Networking presents the wiring among resource repositories, running instances, deployed clouds and VMs, and the intelligence for managing and maintaining the fabric. It essentially forms the nervous system to filter noise, isolate traffic, and establish interconnectivity among VMs based on how Logical Networks and Network Sites are put in place.
Storage reveals the underlying storage complexities and how storage is virtualized. In VMM 2012, a cloud administrator can discover, classify and provision remote storage on supported storage arrays through the VMM 2012 console. VMM 2012 fully automates the assignment of storage to a Hyper-V host or Hyper-V host cluster, and tracks the storage that is managed by VMM 2012.
Deploying Private Cloud
A leading feature of VMM 2012 is the ability to deploy a private cloud, or more specifically to deploy a service to a private cloud. The focus of this article is to depict the operational aspects of deploying a private cloud, with the assumption that an intended application has been well tested, signed off, and sealed for deployment, and that the application resources, including code, the service template, scripts, Server App-V packages, etc., are packaged and provided to a cloud administrator for deployment. In essence, this package has all the intelligence, settings, and contents needed to be deployed as a service. This self-contained package can then be easily deployed on demand by validating instance-dependent global variables and repeating the deployment tasks on a target cloud. The following illustrates the concept, where a service is deployed in update releases and various editions with specific feature compositions, all running concurrently in VMM 2012 fabric. Not only is this relatively easy to do by streamlining and automating all deployment tasks with a service template, the service template can also be configured and deployed to different private clouds.
The secret sauce is a service template, which includes all the where, what, how, and when of deploying all the resources of an intended application as a service. It should be apparent that the skill set and amount of effort needed to develop a solid service template are not trivial, because a service template needs to include not only intimate knowledge of the application, but also the best practices of Windows deployment, in addition to system and network administration, Server App-V, and system management of Windows servers and workloads. The following is a sample service template of StockTrader imported into VMM 2012 and viewed with Designer, where StockTrader is a sample application for cloud deployment downloaded from Windows Connect.
Here are the logical steps I follow to deploy StockTrader with the VMM 2012 admin console:
A successful deployment of StockTrader with minimal instances in my all-in-one laptop demo environment (running on a Lenovo W510 with sufficient RAM) took about 75 to 90 minutes, as reported in the Job Summary shown below.
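For those who prefer scripting, the console steps above map onto a few VMM cmdlets. The following is a hedged sketch, assuming the template and cloud names shown; it illustrates the general deployment pattern rather than the exact procedure used in this demo.

# Hedged sketch: deploy a service from an imported service template.
$template = Get-SCServiceTemplate -Name 'StockTrader'
$cloud = Get-SCCloud -Name 'Contoso Cloud'            # assumed cloud name
$config = New-SCServiceConfiguration -ServiceTemplate $template `
    -Name 'StockTrader Pro Release' -Cloud $cloud
New-SCService -ServiceConfiguration $config           # starts the deployment job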
Once the service template is successfully deployed, StockTrader becomes a service in the target private cloud supported by VMM 2012 fabric. The following two screen captures show a Pro Release of StockTrader deployed to a private cloud in VMM 2012 and the user experience of accessing a trader's home page.
Not If, But When
Witnessing the way the IT industry has been progressing, I envision that private cloud will soon become, just like virtualization, a core IT competency and no longer a specialty. While private cloud is still a topic that is being actively debated and shaped, the upcoming release of VMM 2012 presents, just in time, a methodical approach for constructing a private cloud based on a service-based deployment with fabric. It is a high-speed train and the next logical step for enterprises to accelerate private cloud adoption.
I here forecast the future is mostly cloudy with scattered showers. In the long run, I see a clear cloudy day coming.
I encourage everyone to be ambitious and opportunistic. When it comes to Microsoft private cloud, the essentials are Windows Server 2008 R2 SP1 with Hyper-V and VMM 2012. Those who master these skills first will stand out, become the next private cloud subject matter experts, and lead the IT pro communities. While recognizing that private cloud adoption is not a technology issue, but a culture shift and an opportunity for career progression, IT pros must make the first move.
In an upcoming series of articles tentatively titled “Deploying StockTrader as Service to Private Cloud with VMM 2012,” I will walk through the operations of the above steps and detail the process of deploying a service template to a private cloud.
[To Part 1, 2, 3, 4, 5]
Reference: Hyper-V Replica Broker
Windows Server 2012 Hyper-V Role introduces a new capability, Hyper-V Replica, as a built-in replication mechanism at a virtual machine (VM) level. Hyper-V Replica can asynchronously replicate a selected VM running at a primary site to a designated replica site across LAN/WAN. The following schematic presents this concept.
Here, both a primary site and a replica site are Windows Server 2012 Hyper-V hosts, where a primary site runs production, or the so-called primary VMs, while a replica site stands by with replicated VMs off, to be brought online should the primary site experience a planned or unplanned VM outage. Hyper-V Replica requires neither shared storage nor specific storage hardware. Once an initial copy is replicated to a replica site and replication is ongoing, Hyper-V Replica replicates only the changes of a configured primary VM, i.e. the deltas, asynchronously.
Once all the requirements are put in place, first configure a target replica server, followed by configuring an intended VM at a primary site for replication. The following is a sample process to establish Hyper-V Replica (a PowerShell sketch of the same flow follows the steps):
Identify a Windows Server 2012 Hyper-V host as the replica site and enable Hyper-V Replica in the Hyper-V settings on the host in Hyper-V Manager. The following are Hyper-V Replica sample settings of a replica site, Development.
Using Hyper-V Manager at the primary site, right-click an intended VM and enable Hyper-V Replica by walking through the wizard to establish a replication relationship. The following shows how to enable Hyper-V Replica for A-Selected-VM at the primary site, VDI.
As applicable, carry out the initial replication of a target workload. If an initial copy is to be sent out over the network, this will happen in real-time or according to a schedule at the end of step 2.
Conduct a Test Failover event from the replica site by right-clicking the replicated VM and selecting the option, to confirm the replica's readiness to accept a replication request and process the traffic.
Conduct a Planned Failover event from the primary site after shutting down an intended source VM as shown below.
The following information is returned upon a successful execution of a Planned Failover event of A-Selected-VM from a primary site with VDI as the source Hyper-V host to a replica site with Development as the destination host, for example.
Notice that successfully performing a Planned Failover automatically establishes a reverse replication relationship, where the former replica site (Development) becomes the current primary site, the former primary site (VDI) becomes the current replica site, and the primary VM is started. Replication Health, accessible by right-clicking a VM with Hyper-V Replica enabled in Hyper-V Manager, reveals the current replication relationship. The following shows the Replication Health information of the VM before and after successfully performing a planned failover, with a reverse replication relationship automatically set.
In the event that a source VM experiences an unexpected outage at a primary site, it is necessary to manually start the replicated VM at the replica site. In this scenario, a reverse replication relationship will not be automatically established, since some un-replicated changes may have been lost along with the unexpected outage.
Conduct another Planned Failover event to confirm that the reverse replication works. In the presented scenario, the planned failover will now be from Development back to VDI. Upon successful execution of the planned failover event, the resulting (i.e. reversed) replication relationship should be VDI as the primary site with Development as the replica site, which is the same state as at the beginning of step 5.
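As referenced above, the same flow can be expressed with the Hyper-V PowerShell module. This is a minimal sketch assuming Kerberos authentication and the host and VM names of this example; run each portion on the host indicated in the comments.

# On the replica host (Development): allow inbound replication.
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation 'D:\Replica'
# On the primary host (VDI): enable replication for the VM and send the initial copy.
Enable-VMReplication -VMName 'A-Selected-VM' `
    -ReplicaServerName 'Development' -ReplicaServerPort 80 `
    -AuthenticationType Kerberos
Start-VMInitialReplication -VMName 'A-Selected-VM'
# Planned failover: stop the source VM and prepare on the primary,
# then complete the failover, reverse replication, and start on the replica.
Stop-VM -Name 'A-Selected-VM'                        # on VDI
Start-VMFailover -VMName 'A-Selected-VM' -Prepare    # on VDI
Start-VMFailover -VMName 'A-Selected-VM'             # on Development
Set-VMReplication -VMName 'A-Selected-VM' -Reverse   # on Development
Start-VM -Name 'A-Selected-VM'                       # on Development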
1. Test Hyper-V Replica against VM mobility scenarios, including Live Migration of Hyper-V Replica-enabled VMs and related storage among cluster nodes, SMB 3.0 shares, file servers, etc. This is where network virtualization (presented later in this book) becomes critically needed to provide transparency of network configurations at a VM level and to minimize the technical complexities of relocating a VM or its associated storage.
Incorporate Hyper-V Replica configurations and maintenance into applicable corporate IT standard operating procedures and start monitoring and maintaining the health of Hyper-V Replica resources.
The replication process replicates, i.e. creates, an identical VM in the Hyper-V Manager of the replica server. Subsequently, the change tracking module of Hyper-V Replica tracks and replicates the write operations in the source VM every few minutes after the last successful replication, regardless of whether the associated VHD files are hosted on SMB shares, Cluster Shared Volumes (CSVs), SANs, or directly attached storage devices.
Importantly, to employ a Hyper-V failover cluster as a replica site, one must use Failover Cluster Manager to perform all Hyper-V Replica configuration and management, and first create a Hyper-V Replica Broker role, as demonstrated below.
A Hyper-V Replica Broker is the focal point in this case. It queries the associated cluster database to determine which node is the correct one to redirect VM-specific events, such as Live Migration requests, to in a replica cluster.
A Windows Active Directory domain is not a requirement for Hyper-V Replica, which can also be implemented between workgroups and untrusted domains with certificate-based authentication. Active Directory is, however, a requirement if a Hyper-V host is part of a failover cluster, in which case all Hyper-V hosts of the failover cluster must be in the same Active Directory domain with security enforced at the cluster level.
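For the workgroup or untrusted-domain case, the certificate-based variant of the earlier sketch swaps the authentication parameters; the server name, port, and thumbprint below are purely illustrative.

# Hedged sketch: certificate-based (HTTPS) replication for non-domain hosts.
Enable-VMReplication -VMName 'A-Selected-VM' `
    -ReplicaServerName 'replica.example.net' -ReplicaServerPort 443 `
    -AuthenticationType Certificate `
    -CertificateThumbprint '<thumbprint-of-an-installed-certificate>'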
In a BC scenario, i.e. a planned failover event of a primary VM, Hyper-V Replica first copies any un-replicated changes to the replica VM, such that the event produces no loss of data. Once the planned failover is completed, the replica VM becomes the primary VM and carries the workload, while a reverse replication is automatically set. In a DR scenario, i.e. an unplanned outage of a primary VM, an operator needs to manually bring up the replicated VM with an expectation of some data loss, since changes of the primary VM not yet replicated to the replica VM are lost along with the unplanned outage.
Perform the following steps on your target Windows Server 2008 x64 box.
To get the sidebar,
Now to get Aero, assuming your graphics card is sufficient:
Then spend the next hour adding/removing the gadgets you like/dislike, which is the fun part. After all is done, here's what a Windows Server 2008 desktop may look like.
The series focusing on cloud essentials for IT professionals includes:
Cloud computing, or simply cloud, is changing how IT delivers services and how a user can access computing resources at work, from home, and on the go. Cloud enables IT to respond to business opportunities with on-demand deliveries that are cost-effective and agile in the long run. Much of what is happening in enterprise IT now is a journey to transform the existing IT establishment into a cloud-friendly, cloud-ready, cloud-enabled environment. To start off, there are key concepts we, as IT pros, must grasp to fully appreciate the transformation that is under way.
What Is a Service
In the context of IT, "service" is a term frequently used to describe a form of delivery or availability. On a Windows machine, for example, core services to authenticate users and process commands automatically start and run behind the scenes to provide essential functions for running a desktop session. In the context of cloud computing, I simply explain a service as something delivered "on demand." Namely, a computing resource delivered as a "service" is available on demand to an authorized user. Specifically in cloud computing, "on-demand" also carries additional connotations.
On-demand in the context of cloud computing suggests that how a resource is made available is transparent and not a concern of a subscriber. It implies computing capacities can be adjusted dynamically according to demand. In other words, a subscriber can increase capacity as needed and decrease it when no longer required. On-demand also means there is a business model in place to support "pay as you go" and "pay according to how much you have consumed." In a production environment, there may be administrative as well as operational constraints on how much and how fast a subscriber can change resource allocations. This can and should be negotiated and stated in a service level agreement between a subscriber and a service provider. Conceptually, a service delivered through cloud is a set of computing resources that is available, scalable, and consumable based upon demand. On-demand essentially conveys the characteristics of cloud.
Characteristics of Cloud Computing
Cloud, similar to many IT terms like database, networking, security, collaboration, portal, and workspace, too often means different things to different people. Accessing your company's application via the Internet, is that cloud computing? Employing VPN to authenticate into your private network, is that private cloud? Is remote access considered some form of cloud computing? These questions may seem trivial, yet they are fundamental to precluding ambiguity, uncertainty, and uneasiness when we are facing changes and transitioning from infrastructure-focused deployments to service-centric, i.e. cloud, deliveries. For technical professionals, cloud may mean utility computing, high-speed grids, virtualization, automatic configuration and deployment, on-demand and remote processing, and combinations of them. For non-technical users, cloud is simply the Internet, a cable from a service provider, or just something out there networked with my computer. Whether public, private, or in between, the conventional wisdom, as published in The NIST Definition of Cloud Computing, assumes noticeable characteristics regarding how computing resources are made available in cloud, including:
And realize that, depending on the delivery model, these characteristics translate into different user experiences. For instance, on-demand self-service may imply the ability to acquire an account and create a user profile as in SaaS, to code and publish an application in PaaS, or to configure and deploy a virtual machine in IaaS. This may not be as apparent without a clear understanding of how services are deployed and delivered in cloud.
[To Part 1, 2, 3, 4, 5, 6]
For discussing cloud computing, I recommend employing the following theories as a baseline.
Theory 1: You cannot productively discuss cloud computing without first clearly defining what it is.
The fact is that cloud computing is confusing, since everyone seems to have a different definition of it. Notice the issue is not a lack of definitions, nor the need for an agreed definition. The issue is not having a well-thought-out definition to operate upon. And without a good definition, a conversation about cloud computing all too often becomes non-productive, since cloud computing touches infrastructure, architecture, development, deployment, operations, automation, optimization, manageability, cost, and every aspect of IT. As explained below, it is indeed a generational shift of our computing platform from desktop to cloud. Without a good baseline, a conversation on the subject results in nothing more than an academic exercise, in my experience.
Theory 2: The 5-3-2 principle defines the essence and scopes the subject domain of cloud computing.
Employ the 5-3-2 principle as a message framework to facilitate discussions and improve awareness of cloud computing. The message of cloud computing itself is, however, up to individuals to articulate. Staying with this framework will keep a cloud conversation aligned with the business values which IT is expected to, and should, deliver in a cloud solution.
Theory 3: The 5-3-2 principle of cloud computing describes the 5 essential characteristics, 3 delivery methods, and 2 deployment models of cloud computing.
The 5 characteristics of cloud computing, shown below, are the expected attributes for an application to be classified as a cloud application. These are the differentiators. Questions like “I am running X, do I still need cloud?” can be clearly answered by determining if these characteristics are expected for X.
The 3 delivery methods of cloud computing, as shown below, are the frequently heard Software as a Service, Platform as a Service, and Infrastructure as a Service, namely SaaS, PaaS, and IaaS respectively. Here, the key is to first understand what a service is. All 3 delivery methods are presented as services in the context of cloud computing. Without a clear understanding of what a service is, there is a danger of not grasping the fundamentals and hence misunderstanding all the rest.
The 2 deployment models of cloud computing are public cloud and private cloud. Public cloud is intended for public consumption, while private cloud is a cloud (and notice a cloud should exhibit the 5 characteristics) whose infrastructure is dedicated to an organization. A private cloud, although frequently assumed to be inside a private data center, as depicted below, can be on premises or hosted off premises by a 3rd party. Hybrid deployment is an extended concept of a private cloud with resources deployed both on premises and off premises.
The 5-3-2 principle is a simple, structured, and disciplined way of conversing about cloud computing. 5 characteristics, 3 delivery methods, and 2 deployment models together explain the key aspects of cloud computing. A cloud discussion is to validate the business needs of the 5 characteristics, the feasibility of delivering an intended service with SaaS, PaaS, or IaaS, and whether public cloud or private cloud is the preferred deployment model. Under the framework provided by the 5-3-2 principle, there is now a structured way to navigate through the maze of cloud computing and a direction toward an ultimate cloud solution. Cloud computing will be clear and easy to understand with the 5-3-2 principle as follows:
New and flexible ways to make changes to a Windows Server 2012 installation after the fact are now available. IT pros can now convert a server to and from Server Core, and change the availability of server components that previously were committed at installation time. This introduces new dynamics and exciting scenarios for improving supportability, efficiency, and security. This article highlights the three available installation options and some key operations based on the Release Candidate, Build 8400. Additional information on Windows Server 2012, including a free eBook, available editions, and a reference table summarizing the available features in each installation option, is available elsewhere.
This is the default and preferred configuration for deploying Windows Server 2012. Server Core was introduced in Windows Server 2008 as a minimal installation option and a low-maintenance environment with limited functionality, while reducing:
It is installed without a graphical user interface and with only the binaries required by the configured server roles.
Notice that the preference for deploying Server Core of Windows Server 2012 in an enterprise signifies a new OS standard with improved user experience and supportability, while still offering the above-mentioned key attributes for private cloud computing. The growing number of Server Core instances in production also suggests even higher market demand for process automation, remote management, etc. in the enterprise IT space. For IT pros, that means PowerShell scripting, and a lot of it, which is why in Windows Server 2012 PowerShell support reaches a critical mass with 2,400+ cmdlets, compared to around 230 in the early days of Windows Server 2008.
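As a quick sanity check of how many cmdlets a given installation exposes (the count varies with the roles, features, and modules present), one can run:

# Count the cmdlets visible to the current PowerShell session.
(Get-Command -CommandType Cmdlet).Count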
Server with GUI
This installation option is the one most familiar to IT pros. It installs the user interface and all server tools. The default settings are shown in Figures 1 and 2. In Windows Server 2012, although the interface has a Metro-style look and feel, it does not support Metro-style applications without adding the Desktop Experience feature.
Figure 1. Default GUI Settings of Windows Server 2012 full Installation Shown in Server Manager
Figure 2. Default GUI Settings of Windows Server 2012 full Installation Shown in PowerShell
One important deployment feature in Windows Server 2012 is the ability to convert from a Server with GUI deployment to a Server Core installation, and vice versa, with PowerShell. This is different from the Windows Server 2008 release, where one cannot change the installation option of a server once installed. To convert an installation from Server with GUI to Server Core, run the following PowerShell command:
Uninstall-WindowsFeature Server-Gui-Mgmt-Infra -Restart
The installation will take a few minutes to reconfigure, followed by rebooting into Server Core with all settings removed from User Interfaces and Infrastructure, as shown in Figure 3 below:
Figure 3. Settings in Server Core of Windows Server 2012 from Removing Server-Gui-Mgmt-Infra
Notice that the above conversion from a Server with GUI to a Server Core installation does not completely remove all the associated files from the local disk, so to re-install the GUI components from this state, simply run:
Install-WindowsFeature Server-Gui-Shell -Restart
This will convert the Server Core installation back to Server with GUI with the settings shown in Figures 1 and 2. To completely remove all associated files and dependent components of a role or feature, use the -Remove flag, which brings the feature to a state called "disabled with payload removed." To reinstall a role or feature disabled with payload removed, one will need an installation source and the -Source option for specifying the path, and the component sources must be from the exact same version of Windows for the reinstallation to work. Without the -Source option, PowerShell uses Windows Update by default. This ability to remove and reinstall a component of Windows Server 2012 is presented as "Features on Demand," sketched below.
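For instance, a hedged sketch of the round trip, where the WIM path and image index are illustrative and must match your installation media:

# Remove the payload entirely ("disabled with payload removed").
Uninstall-WindowsFeature Server-Gui-Shell -Remove
# Reinstall later from matching installation media instead of Windows Update.
Install-WindowsFeature Server-Gui-Shell -Restart -Source 'wim:D:\sources\install.wim:4'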
Minimal Server Interface
This is new. In Windows Server 2012, with a Server with GUI installation, one can remove the Server Graphical Shell (which provides the full GUI for the server) to set up a full server installation with the so-called Minimal Server Interface option, using the following PowerShell cmdlet:
Uninstall-WindowsFeature Server-Gui-Shell -Restart
This basically provides a Server with GUI, but without Internet Explorer 10, Windows Explorer, the desktop, and the Start screen. However, Microsoft Management Console (MMC), Server Manager, and a subset of Control Panel are still in place. Minimal Server Interface requires 4 GB more disk space than Server Core alone. Figure 4 shows the user experience of Minimal Server Interface, in which the server boots with the shown settings:
Figure 4. User Experience of Minimal Server Interface of Windows Server 2012
Graphical Management Tools and Infrastructure is the set of features providing a minimal server interface to support GUI management tools. Uninstalling Graphical Management Tools and Infrastructure at this point will further convert the server to a Server Core installation with the settings shown in Figure 3.
There is an apparent dependency relationship between Server-Gui-Mgmt-Infra and Server-Gui-Shell, as illustrated in Figure 5.
Figure 5. Dependency Between Server GUI Components in Windows Server 2012
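One quick way to examine these GUI components and their install states is a wildcard query; a minimal sketch:

# List the GUI-related features and whether each is installed
Get-WindowsFeature Server-Gui*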
This series was delivered a while ago by a team of IT Pro Evangelists including Kevin Remde, Matt Hester, Chris Avis, Chris Henley, and Yung Chou. The information remains relevant for getting well informed on the technologies, the solutions, and how to get your IT environment strategically aligned and integrated with virtualization. To facilitate learning Microsoft virtualization technologies, I have also made a number of free eBooks and posters available, covering Windows Server 2008 R2, Understanding Microsoft Virtualization Solutions, Active Directory, and Hyper-V. Additionally, there are free trainings on virtualization technologies and software evaluation copies of System Center 2012 available.
Regardless of your role and responsibilities, session 1, TechNet Webcast: Virtualization in a Nutshell, is the one you absolutely do not want to miss. This session gives you an overview of all Microsoft virtualization solutions, so you get the big picture and know the context of a solution. You will know "Why virtualize?" and "Why Microsoft?" This session is to advance your understanding of virtualization in general and help you recognize a virtualization opportunity when it presents itself.
(This is a reposting with validated links of a previously published post at http://aka.ms/yungchou)
Virtualization vs. private cloud has confused many IT pros. Are they the same? Or different? In what way and how? "We have already virtualized most of our computing resources, is a private cloud still relevant to us?" These are questions I have been frequently asked. Before getting to the answers, in the first article of the two-part series listed below I want to set a baseline.
Lately, many IT shops have introduced virtualization into their existing computing environments. Consolidating servers, mimicking production environments, virtualizing test networks, securing resources with honey pots, and adding disaster recovery options are just a few applications of virtualization. Some also run highly virtualized IT with automation provided by system management solutions. I imagine many IT pros recognize the benefits of virtualization, including better utilization of servers and the associated savings from reducing the physical footprint. Now that we are moving into a cloud era, the question becomes "Is virtualization the same as a private cloud?" or "We are already running highly virtualized computing today, do we still need a private cloud?" The answers to these questions should always start with "What business problems are you trying to address?" Then assess whether a private cloud solution can fundamentally solve the problem, or whether virtualization is perhaps sufficient. This of course assumes a clear understanding of what virtualization is and what a private cloud is. The point is that virtualization and cloud computing are not the same. They address IT challenges in different dimensions and operate at different scopes with different levels of impact on a business.
To make a long story short, virtualization in the context of IT is to "isolate" computing resources such that an object (i.e. an application, a task, a component) in a layer above can be operated without concern for changes made in the layers below. A lengthy discussion of virtualization is beyond the scope of this article. Nonetheless, let me point out that the terms "virtualization" and "isolation" are chosen for specific reasons, since there are technical discrepancies between "virtualization" and "emulation," and between "isolation" and "redirection." Virtualization isolates computing resources, hence offers an opportunity to relocate and consolidate isolated resources for better utilization and higher efficiency. Virtualization is rooted in infrastructure management, operations, and deployment flexibility. It is about consolidating servers, moving workloads, streaming desktops, and so on, which without virtualization are not technically feasible or may simply be cost-prohibitive.
Cloud computing, on the other hand, is a state, a concept, a set of capabilities. There are statements of what to expect in general from cloud computing. The definition of cloud computing published in NIST SP 800-145 outlines the essential characteristics, the delivery models, and the deployment models required to be cloud-qualified. Chou further simplifies it and offers a plain and simple way to describe cloud computing with the 5-3-2 Principle, as illustrated below.
Realizing the fundamental differences between virtualization and private cloud is therefore rather straightforward. In essence, virtualization is not based on the 5-3-2 Principle, whereas cloud computing is. For instance, a self-service model is not an essential component of virtualization, while it is essential in cloud computing. One can certainly argue that some virtualization solutions may include a self-service component. The point is that self-service is neither a necessary nor a sufficient condition for virtualization. In cloud computing, however, self-service is a crucial concept for delivering anytime availability to users, which is what a service is all about. Furthermore, self-service is an effective mechanism to reduce training and support at all levels. It is a crucial vehicle to accelerate the ROI of a cloud computing solution and make it sustainable in the long run.
So what specifically distinguishes a highly virtualized computing environment from a private cloud?
Building a private cloud is the next step for enterprise IT. A 7-year research effort with 15,000 data points, as shown below, provides overwhelming evidence of a noticeable increase in efficiency and a significant reduction of TCO when moving IT from the Basic into the Rationalized stage of the IO (Infrastructure Optimization) model. And cloud, or more specifically private cloud computing, offers a roadmap to facilitate enterprise IT's transformation from Basic or Standardized into Rationalized and beyond. As users have come to expect IT services to be instantaneously and continuously ready and available, a transition from a traditional infrastructure-focused deployment model into a cloud-ready, cloud-enabled, and service-centric delivery vehicle in enterprise IT is imminent.
Download the above references: Cost Study and Forrester's paper.
Microsoft private cloud solutions are based on Windows Server (or Microsoft Hyper-V Server) and System Center. Windows infrastructure and Active Directory have been deployed over the past two decades in IT shops of various sizes, field-tested, and proven solid. The common core bits in Windows infrastructure and client products lay a strategic layer to ensure seamless integration and stability among products and solutions. The Windows Server family and the System Center suite together prepare IT with a technically sound architecture and a financially sustainable computing platform for the long run.
For IT decision making, there is a private cloud assessment tool to facilitate what-if analysis. Above all, IT professionals must recognize the urgency of the transition to cloud computing, or be left out. With the increasing pressure to produce faster and more with less, the old-school infrastructure-focused deployment model is gone for enterprise IT, while building a private cloud and becoming user- and service-centric is the present and beyond. Enterprise IT is no longer just about managing servers, networks, and storage; instead IT is about shortening go-to-market by delivering services, i.e. ITaaS and resources available on demand. This is not only a transformation of IT, but the survival of any IT establishment.
For IT professionals, there are quality, free training resources for developing technical depth on the Windows Server and System Center platforms to better prepare for and become productive in a cloud engagement.
In Windows Server 2012, a Server Message Block (SMB) file share can now store virtual machine (VM) and SQL Server resources in addition to traditional end-user files like office documents. SMB is a network file sharing protocol that allows applications to read and write files and request services from server programs in a computer network. Windows Server 2012 introduces version 3.0 of the SMB protocol. A Windows Server 2012 Hyper-V host can employ SMB 3.0 file shares as shared storage for VM configuration files, VHDs, and snapshots. Further, SMB file shares can also store user database files of a stand-alone SQL Server 2008 R2. This is a significant feature, providing the capability to dynamically migrate VMs or databases. The following schematic highlights these features.
Hyper-V over SMB
A Windows Server 2012 Hyper-V host can now store virtual machine configuration files, VHDs, and snapshots on file shares over the SMB 3.0 protocol. This can be used with both stand-alone file servers and clustered file servers where Hyper-V uses shared file storage for the cluster. This feature requires the following:
An SMB 3.0 file share is based on the Windows security model, so an Active Directory infrastructure is required, and the computer account of a Hyper-V host employing Hyper-V over SMB must be granted access to the share. The SMB file server must be Windows Server 2012, in which the SMB 3.0 protocol is available by default; one can also use non-Microsoft file servers that implement the SMB 3.0 protocol. For backward compatibility, Hyper-V does not block older versions of SMB; however, the Hyper-V Best Practices Analyzer (BPA) issues an alert when an older version of SMB is detected. Notice that a computer running Hyper-V is not to be used as the file server for its own virtual machine storage; this forms a so-called "loopback" configuration, which is not supported.
Configuring SMB File Share
The process to create an SMB share is uneventful and uses basic Windows operations. From Server Manager, go to File and Storage Services and then click Shares of a target server, as shown in the following screen capture. Here the included screen capture shows the target server, RDVH, remotely accessed via Server Manager. Either from the dropdowns or simply by right-clicking to bring up the menu, start the wizard for creating a new share.
Along the way, a system administrator specifies one of the five included file share profiles. For the advanced profile, a target server must have File Server Resource Manager installed for quota control, as shown below:
An SMB 3.0 file share comes with various settings. A system administrator can have Windows enumerate items based on access control and display only those files and folders a user has permission to access; in that case, Windows hides from a user those files for which the user does not have Read permission. Caching for offline access is optional, and an SMB 3.0 share is also BranchCache-ready. File access with encryption is readily available and enabled with a checkbox. The following shows a sample based on the SMB Share – Advanced profile with both caching and quota control enabled.
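The same kind of share can also be created with PowerShell. A minimal sketch, where the share name, path, and the Hyper-V host computer account (CONTOSO\HV01$) are assumptions for illustration:

# Create an SMB 3.0 share and grant full access to the Hyper-V host's computer account
New-SmbShare -Name VMS -Path C:\VMS -FullAccess 'CONTOSO\Administrator', 'CONTOSO\HV01$'

# NTFS permissions must match the share permissions; icacls is one way to grant them
icacls C:\VMS /grant 'CONTOSO\HV01$:(OI)(CI)F'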
Hyper-V over SMB signifies an emerging trend of cost reduction in storage hardware and a progressing standardization of storage virtualization solutions. With SMB 3.0, a storage administrator can now work with file shares instead of managing storage fabric and logical unit numbers (LUNs). The concepts and operations map directly onto existing Windows system administration skill sets, which reduces the overall training and operating costs. SMB 3.0 file shares run on existing converged networks with no specialized storage networking hardware, which increases the return on existing networks and essentially reduces capital expenditures. The ease of provisioning and management makes storage virtualization a much more manageable and affordable solution, with long-term reductions in both capital and operating expenditures.
The traditional desktop computing model, as shown in Fig. 1, has been one where the operating system, applications, and user data and settings are bound to a single computer. We buy a computer either with the OS and some applications pre-installed, or we apply a hard disk image with a targeted OS and selected applications to the computer hardware. Once a computer is deployed, a user can then log in to the system, customize the environment, run applications, change settings, and create data and files. This model is straightforward and easy to understand. With respect to desktop deployment, it means that the OS, application execution/presentation, and user data are all self-contained within a single device. This model has the advantage of simplicity because it leverages well-understood technologies that ship with Windows. In addition, because a PC in this model is configured to be completely self-sufficient, the solution is well-suited to mobile use. However, the tight binding between the various layers may not be a preference for all scenarios. This model has its limitations.
The tight couplings between the layers provide efficiency; they also introduce dependencies, hence complexities. These complexities make it difficult for users to move applications, settings, and files from one PC to another in the case of an upgrade or a lost or stolen laptop. Multiplied across thousands of desktops and laptops, as in many enterprises, the management of these machines becomes a major concern. As the mobile workforce and the number of branch offices continue to grow with the proliferation of the Internet and the advancement of networking technology, the work environment and data access patterns of information workers have become dynamic and rapidly evolving. The long-term maintenance of computing resources based on the traditional computing model is becoming cost-prohibitive for many companies, while impairing IT's ability to quickly prepare for or respond to a business opportunity.
Desktop virtualization is the process of separating, or more precisely isolating, these individual components and managing each one separately. Fig. 2 shows that by isolating these components, we can abstract and virtualize the computing resources. Each layer can then reference a resource in another layer based on the abstraction or virtualization boundary, without specifying how a referenced resource is configured within its host layer. Overall this reduces complexity and improves PC and application management.
When it comes to virtualization, not all solutions are equal. Microsoft has developed a number of virtualization solutions to address specific issues, as depicted in Fig. 3. There are times when a virtualization solution offers deployment flexibility yet may not be cost-effective. It is crucial to recognize this and architect a virtualization solution accordingly to produce maximal business benefits.
This is the first article of a 5-part series examining the key architectural concepts and relevant operations of a private cloud based on VMM 2012.
VMM, a member of the Microsoft System Center suite, is an enterprise solution for managing policies, processes, and best practices with automation, by discovering, capturing, and aggregating knowledge of the virtualization infrastructure. In addition to the system requirements and the new features and capabilities, there are specific concepts presented in this article that, although fundamental, are nevertheless important to know before building a private cloud solution with VMM 2012. This blog series also assumes a reader has a basic understanding of cloud computing. For those not familiar with cloud computing, I recommend first acquiring the baseline information from my 6-part series, the NIST definition, Chou's 5-3-2 Principle, and hybrid deployment.
Private Cloud in VMM 2012
A private cloud is a "cloud" dedicated to an organization, hence private. Notice that the classification of private cloud or public cloud is not based on where a service runs or who owns the employed hardware. Instead, the classification is based on whom, i.e. which users, a cloud is intended to serve. Which is to say that deploying a cloud on a company's hardware does not automatically make it the company's private cloud. Similarly, a cloud hosted on hardware owned by a third party is not a public cloud by default either.
Nevertheless, as far as VMM 2012 is concerned, a private cloud is specifically deployed on an organization's own hardware, provisioned and managed on-premises by the organization. VMM 2012, succeeding VMM 2008 R2, represents a significant leap in enterprise system management and acts as a private cloud enabler to accelerate transitioning enterprise IT from an infrastructure-focused deployment model into a service-oriented, user-centric, cloud-ready, and cloud-friendly environment, as a reader will learn throughout this series. The best way to evaluate VMM 2012 is to download it and try it yourself.
And There Is This Thing Called "Fabric"
The key architectural concept of a private cloud in VMM 2012 is the so-called fabric. Similar to that in the Windows Azure Platform, fabric in VMM 2012 is an abstraction layer shielding the underlying technical complexities and denoting the ability to manage defined resource pools of compute (i.e. servers), networking, and storage in the associated enterprise infrastructure. This concept is explicitly presented in the UI of the VMM 2012 admin console, as shown here on the right. With VMM 2012, an organization can create a private cloud from Hyper-V, VMware ESX, and Citrix XenServer hosts and realize the essential attributes of cloud computing, including self-service, resource pooling, and elasticity.
Service in VMM 2012
One noticeable distinction of VMM 2012 compared with previous versions of VMM and other similar system management solutions is, in addition to deploying VMs, the ability to roll out a service. I have taken various opportunities in my previous blogs to emphasize the significance of being keen on what a service is and what cloud is in order to fully appreciate the business values brought by cloud computing. The term, service, is often used indiscriminately to explain cloud, and without a grip on what precisely a service is, cloud can indeed be filled with perplexities.
Essentially, the concept of a service in cloud computing is "capacity on demand." To deliver a service is to provide a business function available on demand, i.e. ideally with anytime, anywhere, and any-device access. In a private cloud, this is achieved mainly by a combination of a self-service model, management of resource pooling, and rapid elasticity, which are 3 of the 5 essential characteristics of cloud computing. Specific to private cloud, the 2 other characteristics, i.e. broad network access and a chargeback business model for the service (or simply the application, since in the context of cloud computing an application is delivered as a service), are non-essential: in a private setting an organization may not want to offer broad access to a service, and a chargeback model may not always be applicable or necessary, as already discussed elsewhere.
Particularly, a service in VMM 2012 is implemented as a set of virtual machines (VMs) working together to collectively deliver a business function. To deploy a service in VMM 2012 is therefore to roll out a set of VMs as a whole, as opposed to individual VMs. Managing all the VMs associated with a service as one entity, i.e. a private cloud, has its advantages, and at the same time introduces both opportunities and challenges for better delivering business values. Service Template is an example.
An exciting feature of VMM 2012 is the introduction of the service template, a set of definitions capturing all configuration settings for a single release of a service. As a new release of a service is introduced due to changes in the application, settings, or VM images, a new service template is developed as well. With a service template, a cloud administrator can deploy a service consisting of a set of VMs that are multi-tiered, possibly with multiple VM instances in individual tiers, based on the service configuration. For instance, instead of deploying individual VMs, using a service template in VMM 2012 IT can now deploy and manage a typical web-based application, with web frontends, business logic in a middle tier, and a database backend, as a single service.
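As a heavily simplified sketch of the cmdlet flow with the VMM 2012 PowerShell module (tier and profile definitions are omitted, and all names here are illustrative):

# Create a template for release 1.0 of a service
$template = New-SCServiceTemplate -Name 'WebApp' -Release '1.0'

# Bind the template to a target private cloud and deploy the service as a whole
$cloud = Get-SCCloud -Name 'Contoso Cloud'
$config = New-SCServiceConfiguration -ServiceTemplate $template -Name 'WebApp-SVC01' -Cloud $cloud
$service = New-SCService -ServiceConfiguration $config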
Private Cloud It Is
VMM 2012 signifies a milestone for enterprise IT to actually operate like a service provider. With VMM 2012 soon to be released, IT as a service is becoming a reality. And while some IT professionals are concerned that cloud may take away their jobs, I am hoping that, on the contrary, in reading through this series one will realize the energy and excitement cloud computing has already brought into our IT industry, and the careers it has broadened. I believe private cloud is the greatest thing yet to happen to IT. Anticipation and curiosity arise every time I start envisioning the many possibilities private cloud opens up for IT. It is inspiring to witness cloud computing coming true and to be part of it. And I can't help imagining an IT pro greasing up his hair, walking down the hallway of some datacenter, and shouting out….
I solve my problems and I see the light
We gotta plug and think, we gotta feed it right
There ain't no danger we can go too far
We start believing now that we can be what we are
Cloud is the word
It's got groove, it's got meaning
[To Part 2, 3, 4, 5]
Some noticeable advantages of running applications in Windows Azure are the availability and fault tolerance achieved by the so-called fault domains and upgrade domains. These two terms represent important strategies in cloud computing for deploying and upgrading applications. System Center 2012 SP1 has also integrated the concepts into the Virtual Machine Manager Service Template for deploying a private cloud.
A fault domain is essentially the scope of a single point of failure, and the purpose of identifying and organizing fault domains is to prevent a single point of failure. In its simplest form, a computer connected to a power outlet is a fault domain: if the connection between the computer and its power outlet is cut, the computer is down, hence a single point of failure. Likewise, a rack of computers in a datacenter can be a fault domain, since a power outage of the rack will take out the whole collection of hardware in it, similar to what is shown in the picture here. Notice that how a fault domain is formed has much to do with how hardware is arranged, and a single computer or a rack of computers is not automatically a fault domain. Nonetheless, in Windows Azure a rack of computers is indeed identified as a fault domain. The allocation of fault domains is determined by Windows Azure at deployment time; a service owner cannot control the allocation, but can programmatically find out which fault domain a service instance is running in. The Windows Azure Compute service SLA guarantees the level of connectivity uptime for a deployed service only if two or more instances of each role of the service are deployed.
An upgrade domain, on the other hand, is a strategy to ensure an application stays up and running while undergoing an update. When possible, Windows Azure distributes instances evenly into multiple upgrade domains, with each upgrade domain as a logical unit of a deployment. An upgrade is then carried out one upgrade domain at a time. The steps are: stop the instances running in the first upgrade domain, upgrade the application, bring the instances back online, and repeat the steps in the next upgrade domain. The upgrade is complete when all upgrade domains are processed. By stopping only the instances running within one upgrade domain at a time, Windows Azure ensures that an upgrade takes place with the least possible impact to the running service. A service owner can optionally control the number of upgrade domains with an attribute, upgradeDomainCount, in the Windows Azure Service Definition Schema, i.e. the .csdef file.
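The attribute sits on the root element of the service definition. A minimal sketch, where the service name and the value of 3 are illustrative:

<!-- .csdef: cap the deployment at three upgrade domains -->
<ServiceDefinition name="MyService" upgradeDomainCount="3"
  xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <!-- web and worker role definitions go here -->
</ServiceDefinition>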
Within a fault domain, the concept of fault tolerance does not apply, since everything is either up or down together, with no tolerance of a fault. Only when there is more than one fault domain, managed as a whole, is fault tolerance applicable. In addition to fault domains and upgrade domains, Windows Azure also has network redundancy built into routers, switches, and load balancers to ensure fault tolerance and service availability. The Fabric Controller (FC) also sets checkpoints and stores state data across fault domains to ensure reliability and recoverability.
This particular blog post presents the routines to conduct an RDS Quick Start session-based deployment, which also serves as an accelerated learning roadmap of RDS in Windows Server 2012. These routines build the essential skills and set the foundation for later carrying out a Microsoft Virtual Desktop Infrastructure (VDI) deployment. Those who would like to get familiar with RDS should first review the article, RDS Architecture Explained.
RDS is the delivery vehicle of Microsoft RemoteApp programs and VDI. In enterprise IT strategies, RDS plays an important role in adopting consumerization of IT and BYOD (Bring Your Own Device) initiatives by minimizing application and desktop device requirements down to almost just an HTTP session for anytime, anywhere, any-network access.
In the Windows Server 2008 releases, setting up RDS can be a daunting task. There are many moving parts, with various configurations, policies, certificates, etc. to integrate. This is no longer the case. In Windows Server 2012, the RDS deployment and maintenance processes have been dramatically simplified and automated, with a smooth and rich user experience, as presented later in this article.
Above all, RDS realizes the flexible desktop concept and the so-called modern work style, where authorized LOB applications, with location and device transparency, follow a user and not the other way around. RDS is becoming an essential part of enterprise infrastructure for enabling application deployment as a service.
The complexities of what happens under the hood in RDS can easily overwhelm even an experienced Windows administrator. Windows Server 2012 introduces the so-called Quick Start deployment, and as the name suggests, it minimizes the infrastructure requirements and makes a deployment a very quick and straightforward process.
Quick Start is an option in RDS deployment during the process of adding roles and features with Windows Server 2012 Server Manager. It dramatically simplifies the deployment process and shortens go-to-market while still providing the ability to add additional RDS servers as needed. The abstraction formed by RDWA, RDCB, and RDSH offers such elegance that the Quick Start process integrates the three and deploys all to one server in a rather uneventful process.
For prototyping a centralized remote access environment, demonstrating and testing a VDI solution, or simply building a study lab for self-training, Quick Start is a fast track for getting RDS up and running in a matter of minutes.
At this point, an RDS session-based deployment is in place with three sample RemoteApp programs published. Let's examine the user experience of accessing RDS RemoteApp programs.
Once RDS RemoteApp programs are published, a user can simply access https://the-RDWA-Server-URL/rdweb. Once authenticated, the user is presented with the authorized RemoteApp programs.
Windows Server 2012 leaps forward and sets a new standard for hybrid cloud, the next generation of enterprise computing. While cloud computing is emerging as a service delivery platform, the newly introduced capabilities and features make Windows Server 2012 not only a technology transformation vehicle for enterprise IT, but also a skill upgrade and a career accelerator for IT professionals. Here, seven noticeable capabilities and features highlight the significance of this release and offer a terse learning roadmap. An evaluation copy of Windows Server 2012 in both ISO and VHD formats is available at http://aka.ms/8.
1. Installation Options and Features On Demand
There are now three installation options of Windows Server 2012, changeable on demand. Server Core, Minimal Server Interface, and Server with GUI are the three installation types, and a system administrator can change the installation type from one to another as needed by installing or uninstalling the associated Windows features, as shown below, such that Server Manager, MMC, or the full graphical interface can be made available to facilitate administration, or disabled for better security. The post, Windows Server 2012 Installation Options, has additional details. Additionally, with Features On Demand, a new capability of Windows Server 2012, a user can remove or add back a server feature based on business needs with PowerShell commands.
With the installation options and Features On Demand, a system administrator can now enable or disable the UI level of a server installation as needed and operate with maximal productivity without permanently altering the installed payloads.
2. Storage Spaces and Thin Provisioning
With just a bunch of disks (or simply JBOD), Windows Server 2012 Storage Spaces offers an abstraction to manage selected disks from the JBOD as one entity, a so-called storage pool. In File and Storage Services of Server Manager, a user first defines a storage pool, creates a virtual disk which defines the storage layout carved from the pool, then formats and creates a volume on that virtual disk. Windows Server 2012 can operate as a software RAID controller and offers three storage layouts, Simple/Mirror/Parity, comparable to RAID 0/1/5. The storage capacity of a storage pool can be provisioned with the Thin or Fixed method, where the former can over-subscribe and allocate storage capacity just in time, and the latter acquires the specified storage capacity at configuration time. The post, Windows Server 2012 Storage Virtualization Explained, further examines the concept.
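The same workflow can also be scripted with the Windows Server 2012 storage cmdlets. A minimal sketch, where the pool and disk names are illustrative:

# Pool all local disks that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName Pool1 -StorageSubSystemFriendlyName 'Storage Spaces*' -PhysicalDisks $disks

# Carve a thin-provisioned, mirrored virtual disk out of the pool
New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName VDisk1 -ResiliencySettingName Mirror -ProvisioningType Thin -Size 2TB

# Initialize, partition, and format the virtual disk as a volume
Get-VirtualDisk -FriendlyName VDisk1 | Get-Disk | Initialize-Disk -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem NTFS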
3. Hyper-V over SMB
Server Message Block (SMB) is a network file sharing protocol that allows applications to read and write files and request services from server programs in a computer network. Windows Server 2012 introduces version 3.0 of the SMB protocol. A Windows Server 2012 Hyper-V host can employ SMB 3.0 file shares as shared storage for virtual machine (VM) configuration files, VHDs, and snapshots. Further, SMB file shares can also store user database files of a stand-alone SQL Server 2008 R2. This capability is significant, since VMs or databases can now be dynamically migrated onto an SMB 3.0 share, which is natively supported by Windows Server 2012.
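For instance, assuming a share \\FS01\VMS has already been configured with the proper permissions, a new VM can be placed on it directly; the names and sizes are illustrative:

# Create a VM whose configuration and virtual disk both live on an SMB 3.0 share
New-VM -Name VM01 -MemoryStartupBytes 1GB -Path \\FS01\VMS -NewVHDPath \\FS01\VMS\VM01\VM01.vhdx -NewVHDSizeBytes 60GB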
4. Continuously Available File Share
Scale-out file servers, one of the available cluster roles, are built on top of the Failover Clustering feature of Windows Server 2012 and the SMB 3.0 protocol enhancements. Scale-out file servers can scale capacity upward or downward dynamically as needed. In other words, one can start with a low-cost solution such as a two-node file server, and later add nodes (up to 4) without affecting the operation of the file server. The following depicts the logical steps of constructing a Continuously Available File Share infrastructure in a Windows Server 2012 environment.
SMB 3.0 is capable of simultaneous access to data files with direct I/O through all the nodes in a file server cluster. This improves the usage of network bandwidth and the load balancing of file server clients, and also optimizes performance for server applications. SMB scale-out requires Cluster Shared Volumes version 2, which is included in Windows Server 2012.
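Assuming a failover cluster has already been formed, the scale-out role can be added with a one-liner from the FailoverClusters module; the names here are illustrative:

# Add the Scale-Out File Server role to an existing cluster
Add-ClusterScaleOutFileServerRole -Name FSSO -Cluster Cluster1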
5. Hyper-V Replica
With Hyper-V Replica, administrators can replicate their Hyper-V virtual machines from one Hyper-V host at a primary site to another Hyper-V host at the Replica site. This feature lowers the total cost-of-ownership for an organization by providing a storage-agnostic and workload-agnostic solution that replicates efficiently, periodically, and asynchronously over IP-based networks across different storage subsystems and across sites. This scenario does not rely on shared storage, storage arrays, or other software replication technologies.
For small and medium businesses, Hyper-V Replica is a technically easy-to-implement and financially very affordable disaster recovery (DR) solution.
Information about Hyper-V Replica in a cluster setting is also available at http://aka.ms/ReplicaBroker.
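As a minimal sketch of enabling replication for one VM with Kerberos authentication (the server names are illustrative, and the replica server must first be configured to accept replication):

# On the primary host: replicate VM01 to the replica server over HTTP/Kerberos
Enable-VMReplication -VMName VM01 -ReplicaServerName replica.contoso.com -ReplicaServerPort 80 -AuthenticationType Kerberos

# Kick off the initial replication over the network
Start-VMInitialReplication -VMName VM01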
6. Shared Nothing Live Migration
Live Migration is the ability to move a virtual machine from one host to another while powered on without losing any data or incurring downtime. With Hyper-V in Windows Server 2012, Live Migration can be performed on VMs using shared storage (SMB share) or on VMs that have been clustered.
Windows Server 2012 also introduces shared nothing live migration, which needs no shared storage and no shared cluster membership. All it requires is a Gigabit Ethernet connection between Windows Server 2012 Hyper-V hosts. With shared nothing live migration, a user can relocate a VM between Hyper-V hosts, including moving the VM's virtual hard disks (VHDs), memory content, processor, and device state, with no downtime to the VM. In the most extreme scenario, a VM running on a laptop with VHDs on the local hard disk can be moved to another laptop connected by a single Gigabit Ethernet network cable.
One should not assume that shared nothing live migration makes failover clustering unnecessary. Failover clustering provides a high availability solution, whereas shared nothing live migration is a mobility solution that gives new flexibility for the planned movement of VMs between Hyper-V hosts. Live migration supplements failover clustering. Think of being able to move VMs into, out of, and between clusters, and between standalone hosts, without downtime. Any storage dependencies are removed with shared nothing live migration.
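A minimal sketch, where the host names and paths are illustrative:

# One-time setup on each host: allow live migrations with Kerberos authentication
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos

# Move VM01, including its storage, to another standalone host
Move-VM -Name VM01 -DestinationHost HV02 -IncludeStorage -DestinationStoragePath D:\VMs\VM01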
7. Hyper-V Network Virtualization
Network virtualization is conceptually very similar to server virtualization, where multiple server instances run on the same hardware while each instance operates as if on dedicated hardware. Likewise, network virtualization runs multiple virtual networks over the same physical network while each virtual network operates as if on a dedicated one. The following depicts the concept.
The implementations of network virtualization include two methods:
GRE employs one host IP address, with minimal burden on the switches, full MAC headers, and explicit Tenant Network ID marking for traffic analysis, metering, and control. Address Rewrite works with existing NICs, switches, and network appliances, and is immediately and incrementally deployable today. Windows Server 2012 Hyper-V leverages both methods.
The following is a sample of Hyper-V network virtualization in action, where the two domains, contoso.yc and fabrikam.yc, run on the same Hyper-V host on the same NIC but on separate VLANs. The shown DNS configuration on the DC of each domain uses the same IP configuration, while the two domains are isolated from each other as if each runs on a dedicated network fabric.
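VLAN isolation of this kind can be set per virtual network adapter. A minimal sketch, where the VM names and VLAN IDs are illustrative:

# Put each tenant's DC on its own VLAN on the same host NIC
Set-VMNetworkAdapterVlan -VMName ContosoDC -Access -VlanId 10
Set-VMNetworkAdapterVlan -VMName FabrikamDC -Access -VlanId 20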
For enterprise IT, private cloud is becoming the next big thing to build upon virtualization. Review any technical media and you will find private cloud mentioned and discussed over and over again. While many may argue that highly virtualized computing is a private cloud, the two are fundamentally different. One key idea of private cloud is a service-based deployment, as opposed to virtualization, which is a virtual machine-focused roll-out. The significance of private cloud in many aspects of IT will become immediately apparent once you have a chance to build and test one. I highly encourage you to take the opportunity to download the listed trial software, practice, and become familiar with the basic administration of Windows Server 2008 R2 SP1 and System Center Virtual Machine Manager 2012 to best prepare yourself. The technologies are already there, and the opportunity has arrived for you to become the next private cloud expert.
Yung Chou's Slides (PDF) and other shared resources
Must-read for IT pros:
Free eBooks, Trainings, and Downloads:
Essential Certifications for Private Cloud:
VHD is a file format employed in Microsoft virtualization solutions. Essentially it operates and behaves much like a physical hard disk, while in fact it is a file. There is much information already available regarding VHD, and those who are not familiar with this format should review the Virtual Hard Disk Getting Started Guide first. There are various ways to create and manage a VHD. For those who are deployment-focused or prefer operating via a command prompt, DiskPart is available. With a GUI, there are also Hyper-V Manager and Disk Management with VHD operations. In this post, the focus is on the VHD operations with Hyper-V Manager, and there are really just three routines: creating, editing, and inspecting a VHD. One can start these routines from the Action dropdown menu or the Actions pane of Hyper-V Manager once a Hyper-V host is highlighted. To create, edit, or inspect a VHD, simply click the corresponding option as shown above.
The following individual routines present the user experience after a user starts a particular routine by clicking the option indicated by the top-level heading. Also notice that the term, VHD, depending on the context, stands for either a virtual hard disk itself or the format of a virtual hard disk.
This type allocates storage at VHD creation time. The size of a Fixed Size, or Fixed, VHD, as the name indicates, stays the same throughout the life of the disk. Since all available storage is allocated at creation time, a Fixed VHD offers predictable and the best performance on operations relevant to storage allocation, and is recommended for production use.
In the process, Windows Server 2012 defaults the format of a new blank VHD to VHDX and the size to 127 GB. Here, the shown routine resets the size and creates a 5 GB VHD on the local hard disk. The 5 GB size is chosen due to limited disk space on the associated hard disk. To create a VHD for installing an OS, for example, the size of the VHD should be large enough to include the OS, patches, applications, temp storage, page files, buffer space, etc.
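With the Hyper-V PowerShell module, the same Fixed VHD can be created in one line; the path is illustrative:

# Create a 5 GB fixed-size virtual hard disk in the VHDX format
New-VHD -Path C:\VHDs\Fixed5GB.vhdx -Fixed -SizeBytes 5GB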
This type of VHD is first created with just housekeeping (or header/footer) information, i.e. the name, location, maximum size, etc. of the disk. As data is written into a Dynamic VHD, the total size of the VHD grows accordingly. Here is a routine to create a 5 GB Dynamic VHD.
So a Dynamic VHD is rather small when first created, and its size grows as data is written to the disk. At any given time, a Dynamic VHD has a size equal to the actual data written to it plus the housekeeping information. Notice that upon deleting data from a Dynamic VHD, the space of the deleted data is not reclaimed until an Edit Disk/Compact operation is performed on the disk.
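In PowerShell, the creation is the same one-liner with a different type switch; the path is again illustrative:

# Create a 5 GB dynamically expanding virtual hard disk
New-VHD -Path C:\VHDs\Dynamic5GB.vhdx -Dynamic -SizeBytes 5GB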
A Dynamic VHD is recommended for development and testing, due to its relatively small footprint to manage. A server intended to run applications that are not disk-intensive is also a possible candidate for a Dynamic VHD. Still, when it comes to performance, a Fixed VHD performs better than a comparable Dynamic VHD in most scenarios, by roughly 10% to 15%, with the exception of 4K writes, where a Fixed VHD performs significantly better, as documented in Hyper-V and VHD Performance - Dynamic vs. Fixed.
For backward compatibility, here is a routine to edit and change the format of a disk from VHDX to VHD. Since this operation creates a new disk with a copy of the source content, there is an opportunity to specify both the format and the type of the new disk. Here, in addition to the format, the type is changed from Fixed to Dynamic. In other words, the operations to convert a VHD in effect copy the source disk to a newly created disk with a specified format and a selected type.
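A minimal sketch with Convert-VHD, where both paths are illustrative:

# Copy the source VHDX to a new dynamically expanding disk in the VHD format
Convert-VHD -Path C:\VHDs\Fixed5GB.vhdx -DestinationPath C:\VHDs\Converted5GB.vhd -VHDType Dynamic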
Converting the format does not apply to a Differencing VHD, since both the format and the type are dependencies between a child disk and its parent, and are not to be changed if the parent-child link is to keep working, even though the Convert option is available for a Differencing VHD.
To increase the size of a Dynamic VHD, edit and expand the disk. The process is fairly straightforward.
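A sketch with an illustrative path and new size:

# Expand the virtual hard disk to 10 GB
Resize-VHD -Path C:\VHDs\Dynamic5GB.vhdx -SizeBytes 10GB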
To permanently introduce changes captured in a child disk, edit the child disk and select the option to merge it into the parent disk. On the left, the process shows that the changes can be merged directly into the parent disk itself or into a newly created Dynamic or Fixed disk. This routine is likely to follow a successful test/validation of a target patch or a new device driver against a child disk, with an existing deployment image as the parent disk, for example.
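A sketch with Merge-VHD, where the paths are illustrative:

# Merge the changes in the child (differencing) disk into its parent
Merge-VHD -Path C:\VHDs\Child.vhdx -DestinationPath C:\VHDs\Parent.vhdx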
Aside from public cloud, private cloud, and something in between, the essence of cloud computing is fabric. This 2nd article of the 5-part series annotates the concept and methodology of forming a private cloud fabric with VMM 2012. Notice that throughout this article, I use the following pairs of terms interchangeably:
And this series includes:
Fabric in Windows Azure Platform: A Simplistic, Yet Remarkable View of Cloud
In cloud computing, fabric is a frequently used term. It is nevertheless not a product, nor a packaged solution that we can simply unwrap and deploy. Fabric is an abstraction, an architectural concept, and a state of manageability denoting the ability to discover, identify, and manage the lifecycle of instances and resources of a service. In an oversimplified analogy, fabric is a collection of hardware, software, wiring, configurations, profiles, instances, diagnostics, connectivity, and everything else that all together form the datacenter(s) where a cloud is running. The Fabric Controller (FC, a term coined by the Windows Azure Platform) is likewise an abstraction to signify the ability and designate the authority to manage the fabric in a datacenter and all instances and associated resources supported by the fabric. As far as a service is concerned, FC is the quintessential owner of fabric, datacenters, and the world, so to speak. Hence, without the need to explain the underlying physical and logical complexities in a datacenter of how hardware is identified and allocated, how a virtual machine (VM) is deployed to and remotely booted from bare metal, how application code is loaded and initialized, how a service is started and reports its status, how required storage is acquired and allocated, and on and on, we can now summarize the 3,500-step process, for example, of bringing up a service instance in Windows Azure Platform by simply saying that FC deploys a service instance with fabric.
Fundamentally, what a PaaS user expects is that a subscribed runtime (or "platform," as preferred) environment is in place so cloud applications can be developed and run. And for an IaaS user, it is the ability to provision and deploy VMs on demand. How a service provider, which in a private cloud setting normally means corporate IT, makes PaaS and IaaS available is not a concern for either user. As a consumer of PaaS or IaaS, this is significantly helpful and allows a user to focus on what one really cares about: a predictable runtime to develop applications, and the ability to provision infrastructure as needed, respectively. In other words, what happens under the hood of cloud computing is collectively abstracted and gracefully presented to users as "fabric." This simplicity brings much clarity and elegance by shielding extraordinary, if not chaotic, technical complexities from users. The stunning beauty unveiled by this abstraction is just breathtaking.
Fabric Concept and VMM 2012
Similar to that in the Windows Azure Platform, fabric in VMM 2012 is an abstraction to hide the underlying complexities from users and signify the ability to define and manage resource pools as a whole. This concept is explicitly presented in the UI of the VMM 2012 admin console, as shown here on the right. There should be no mystery at all about what the fabric of a private cloud in VMM 2012 is. And a major task in the process of building a private cloud is to define and configure this fabric using the VMM 2012 admin console. Specifically, there are 3 definable resource pools:
Clearly the magnitude and complexities are not on the same scale when comparing the fabric of the Windows Azure Platform in public cloud with that of VMM 2012 in private cloud. Further, there are other implementation details, like replicating FC throughout geo-dispersed fabric, etc., not covered here, which complicate the FC in the Windows Azure Platform even more. The idea of abstracting details not relevant to what a user is trying to accomplish is nevertheless very much the same in both technologies. In a sense, VMM 2012 is an FC (in a simplistic form) of the defined fabric consisting of the Servers, Networking, and Storage pools. And in these pools, there are functional components and logical constructs that collectively constitute the fabric of a private cloud.
This pool embodies containers hosting the runtime execution resources of a service. Host groups contain virtualization hosts as the destinations where virtual machines can be deployed based on authorization and service configurations. Library servers are the repositories of building blocks like images, ISO files, templates, etc. for composing VMs. To automatically deploy images and boot a VM from bare metal remotely via networks, pre-boot execution environment (PXE) servers are used to initiate the operating system installation on a physical computer. Update servers like WSUS are for servicing VMs automatically based on compliance policies. For interoperability, the VMM 2012 admin console can add VMware vCenter Servers to enable the management of VMware ESX hosts. And of course, the console has visibility into all authorized VMM servers, which form the backbone of the Microsoft virtualization management solution.
In VMM 2012, the Networking pool is where to define logical networks, assign pools of static IPs and MAC addresses, integrate load balancers, etc. to mash up the fabric. Logical networks are user-defined groupings of IP subnets and VLANs to organize and simplify network assignments. For instance, HIGH, MEDIUM, and LOW can be the definitions of three logical networks, such that real-time applications are connected with HIGH and batch processes with LOW, based on a specified class of service. Logical networks provide an abstraction of the underlying physical infrastructure and enable an administrator to provision and isolate network traffic based on selected criteria like connectivity properties, service-level agreements (SLAs), etc. By default, when adding a Hyper-V host to a VMM 2012 server, VMM 2012 automatically creates logical networks that match the first DNS suffix label of the connection-specific DNS suffix on each host network adapter.
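As a minimal sketch with the VMM 2012 cmdlets, where the names are illustrative:

# Define three logical networks for different classes of service
'HIGH','MEDIUM','LOW' | ForEach-Object { New-SCLogicalNetwork -Name $_ }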
In VMM 2012, you can configure static IP address pools and static MAC address pools. This functionality enables you to easily allocate addresses for Windows-based virtual machines running on any managed Hyper-V, VMware ESX, or Citrix XenServer host. This feature leaves much room for creativity in managing network addresses. VMM 2012 also supports adding hardware load balancers to the VMM console, and creating associated virtual IP (VIP) templates, which contain load balancer-related configuration settings for a specific type of network traffic. Readers with networking or load-balancing interests are highly encouraged to experiment with and assess the networking features of VMM 2012.
With the VMM 2012 admin console, an administrator can discover, classify, and provision remote storage on supported storage arrays. VMM 2012 uses the new Microsoft Storage Management Service (installed by default during the installation of VMM 2012) to communicate with external arrays. An administrator must install a supported Storage Management Initiative – Specification (SMI-S) provider on an available server, and then add the provider to VMM 2012. SMI-S is a storage standard for interoperating among heterogeneous storage systems. VMM 2012 automates the assignment of storage to a Hyper-V host or Hyper-V host cluster, and tracks the storage that it manages. Notice that storage automation through VMM 2012 is supported only for Hyper-V hosts.
Where There Is A Private Cloud, There Are IT Pros
Aside from public cloud, private cloud, and something in between, the essence of cloud computing is fabric. And when it comes to a private cloud, it is largely about constructing and configuring fabric. VMM 2012 has laid out what fabric means for a private cloud and offers prescriptive guidance on how to build it by populating the Servers, Networking, and Storage resource pools. I hope it is clear at this point that, particularly for a private cloud, forming fabric is not a programming commission, but one relying much on the experience and expertise of IT pros in building, operating, and maintaining an enterprise infrastructure. It is about integrating the IT tasks of building images, deploying VMs, automating processes, managing certificates, hardening security, configuring networks, setting IPsec, isolating traffic, walking through traces, tuning performance, subscribing to events, shipping logs, restoring tables, etc., etc., etc. with the three resource pools. And yes, it is about what IT professionals do every day to keep the system running. And that brings us to one conclusion.
Private cloud is the future of IT pros. And let the truth be told: "Where there is a private cloud, there are IT pros."
A follow-up to this posting with a screencast is available.
Event subscription has been one of the most requested server features by sysadmins. Combined with task scheduling, it is a cost-effective and customizable tool to get a consolidated view of monitored activities and events on target servers, and to issue timely alerts. In Windows Server 2008, subscribing to and forwarding events, with triggers to send out alerts, can be done very easily as follows:
1. Create a subscription from Event Viewer.
2. Configure the subscription based on your requirements. The shown configuration settings are for demonstration and are not necessarily recommended.
Make sure to click User and Password and provide the user credentials.
3. Once configured, the subscription is listed as ready. Right-click to start running the subscription.
4. Now the subscribed events will be listed under the Forwarded Events log. Notice that a subscribed event may take some time to show up in the log after it has occurred on a targeted server. If user credentials are specified and the latency is minimized, the forwarding should happen within a minute or so.
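Behind the scenes, both ends also need the plumbing in place. A sketch of the typical one-time setup from an elevated prompt, run on the machines noted in the comments:

winrm quickconfig   # on each source computer: enable the WinRM listener
wecutil qc          # on the collector: enable the Windows Event Collector service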
To schedule a task for sending out alerts upon the arrival of a subscribed event,
1. Start Task Scheduler from the Administrative Tools.
2. Configure the task based on your requirements. The shown configuration settings are for demonstration and are not necessarily recommended.
3. Once configured, the task is listed as ready. Right-click to start running the task.
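Equivalently, such a task can be created from the command line. A sketch, where the task name, event ID, and script path are illustrative; /EC names the event channel to watch and /MO carries the XPath query that selects the triggering events:

schtasks /Create /TN "AlertOnForwardedEvent" /SC ONEVENT /EC ForwardedEvents /MO "*[System[(EventID=6008)]]" /TR "powershell.exe -File C:\Scripts\Send-Alert.ps1"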
In Part 1, I talked about what "service" in the context of cloud computing means. Cloud is all about delivering services, i.e. making resources available on demand based on needs, paid by use, and with the characteristics of ubiquitous network access, resource pooling, etc. Still, we need to clearly define what cloud is. Without a common definition for a subject as broad as cloud computing, it is hard to navigate through the overwhelming business and technical complexities. So here is the six-million-dollar question.
What Is Cloud
It is important to understand that there are service delivery models and deployment models, and both are needed to fully describe what cloud is. There are 3 ways to deliver services via cloud.
Software as a Service, or SaaS, is a model where an application is available on demand. It is the most common form of cloud computing delivered today. Microsoft Office 365, including Exchange Online, SharePoint Online, Lync Online, and the latest version of the Microsoft Office Professional Plus suite, is a SaaS offering to businesses.
Platform as a Service, or PaaS, is a platform available on demand for development, testing, deployment, and ongoing maintenance of applications without the cost of buying the underlying infrastructure and software environments. Windows Azure Platform is a cloud computing platform on which Microsoft's internal IT (MSIT) organization quickly built and deployed the Social eXperience Platform (SXP) to enable social media capabilities across Microsoft.com, as documented.
Infrastructure as a Service, or IaaS, is computing infrastructure, i.e. compute, storage, and networking, available on demand.
On deployment, there are two base models. Public cloud is cloud computing made available through the Internet to the general public or targeted users, and is owned by an organization offering cloud services. Examples are Microsoft Windows Live as a free public cloud offering for consumers, and Microsoft Online: Office 365 for businesses. Private cloud, on the other hand, is cloud available solely to an organization, regardless of whether the cloud capabilities are managed by the organization or a third party, and whether it exists on premises or off premises. Based on the two models, some derive additional models like hybrid cloud, community cloud, etc. to highlight the implementation or the intended audiences. For private cloud, two service delivery models, PaaS and IaaS, are applicable, since in a private setting one cannot deliver SaaS without having PaaS in place. Notice Hyper-V Cloud is a solution for building private cloud: a set of initiatives, guidelines, and offerings to help enterprises deliver IaaS in a managed environment. Also, the above-mentioned delivery models are significant since, once a model is selected to fulfill business objectives, responsibilities are implicitly agreed upon and accepted by the party hosting the cloud facility and the party subscribing to the services.
Separation of Responsibilities
An important attribute of cloud computing is the separation of a subscriber's responsibilities from those of a service provider. By subscribing to a particular service delivery model, a subscriber in essence agrees to relinquish a certain level of access to and control over resources managed by the service provider. As I have discussed in Cloud Computing Primer for IT Pros, we must recognize and be mindful of the limitations of each service delivery model when assessing cloud. When a particular function or capability, like security, traceability, or accountability, is needed yet not provided by an intended delivery model, a subscriber needs to either negotiate with the service provider and put specifics in a service level agreement, or employ a different delivery model such that the desired function becomes available. A lack of understanding of the separation of responsibilities, in my view, frequently results in false expectations of what cloud computing can or cannot deliver.