The Virtually Speaking about Cloud Computing series by Yung Chou includes:
About This Video
In this installment of Virtually Speaking, Yung Chou dives deeper into the basics of cloud computing and how it works. Tune in as Yung goes through the 5-3-2 principle of cloud computing and examines the various service delivery models available, as well as how each scenario can impact your business.
In today’s episode, Sr. IT Pro Evangelist Yung Chou kicks off his new “Virtually Speaking” series as he sets a baseline understanding of what cloud computing is at its core, how it has evolved over several decades, and what its true value is to IT professionals, consumers, and the technology industry as a whole.
Check out the TechNet Virtual Lab: System Center Virtual Machine Manager 2012 - Building a Service Template.
This 5-part series, as listed below, highlights the steps to deploy StockTrader as a service to a private cloud using the service template. After successfully importing the StockTrader service template into VMM 2012 in Part 2, we can then customize the service template, as preferred, using Service Template Designer, which is the scenario here.
In the admin console of VMM 2012, the Library workspace is the repository where all file-based resources, including VHDs, ISO images, application packages, scripts, etc., are kept and made visible to the private cloud. The context-sensitive UI displays the utilities applicable to the type of a highlighted resource. The following illustrates a list of the resources indexed in all the associated network shares of an examined Library Server, here r2host.contoso.corp, with the UI presenting the utilities available for operating on the highlighted resource, here an answer file.
Examining the Library workspace, in addition to Library Servers there are other containers, including Templates and Profiles. Specifically, the Templates container is where service templates are presented.
Service Template Designer
This is a new tool in VMM 2012 for authoring and examining a service template. In the Library workspace, when highlighting a service template in the Service Templates container, Service Template Designer becomes available in the Service Template Tools group. Click Open Designer, as illustrated below, to load the service template into Service Template Designer.
Application Architecture Encapsulated in Service Template
When the StockTrader service template is loaded in the Designer, a visual presentation of the application architecture reveals a four-tier design signified by the four corresponding VM templates, as depicted below: a web front-end, a business service layer, an operations layer, and a database back-end. The four VMs are connected with a logical network and collectively form the application architecture of StockTrader. The what, when, why, and how to configure these VMs are specified in the VM templates. Further examination reveals the dependencies defined within these VM templates: two web applications are to be configured in the Web Tier, a Server App-V package is to be deployed to each of the two Mid Tier machines, and a number of DACPAC packages are to be installed to the SQL Tier during VM instance creation.
Properties of the service or of an examined VM template provide the configuration details and the intelligence of why, when, and how to instantiate these objects.
The accessibility of a service can be directly defined in a service template. The following shows that StockTrader Service Owner, a defined self-service user role, is given access to the StockTrader service.
Within VM template properties, VMM 2012 now includes features and settings with cloud computing in mind. Scalability, memory optimization, server role and feature designation, operational intelligence and precision, SQL deployment, and quotas are just a few, as highlighted below.
The properties shown above are validated and customized, as preferred. Once done, the service template is almost ready for deployment. Almost, however, not yet; Part 4 will explain.
[To Part 1, 2, 4, 5]
To deploy an application as a service to a private cloud in VMM 2012, a service template is the key. In this second article of the 5-part blog post series as shown below, let’s walk through the process to make a service template ready for use.
For those who would like to build a test lab, download Windows Server 2008 R2 SP1 and System Center products including VMM 2012. There are also free eBooks and posters illustrating many important concepts of virtualization.
This is a main delivery of VMM 2012, and a noticeable differentiator from VMM 2008 R2: VMM 2012 is designed with the service concept and a private cloud in mind. A service in VMM 2012 is a set of VMs collectively delivering a business function, and they are configured, deployed, operated, and managed as a whole. A service template is the vehicle to realize the service concept.
Physically an XML file, a service template encapsulates “everything” needed to do a push-button deployment of an application architecture with a running instance of a target application. Just imagine: all the knowledge and tasks involved in an application deployment other than hardware allocation, from application architecture to configurations, operations, and procedures, are orchestrated and encapsulated in this XML file. Here the hardware allocation is managed by VMM 2012 with the private cloud fabric and is transparent to an application. And by “everything” of an application deployment, I specifically mean:
Importing into Private Cloud Fabric
To deploy an application as a service into a target private cloud, first make all resources relevant to the deployment visible in the private cloud fabric. This can be done by simply copying (e.g. with xcopy) the StockTrader package, as shown on the right, into a library share of a VMM 2012 server already configured as part of the private cloud fabric. (The information to download StockTrader is detailed at the end of Part 1.) Then, in the admin console of VMM 2012, import the service template as shown above.
By default, VMM 2012 refreshes a library share every 60 minutes, as shown below. This refresh interval should be set depending on how often changes are introduced, as well as on the network topology and bandwidth.
As needed, an administrator can simply right-click and manually refresh a library share in the VMM 2012 admin console, as shown on the left, to index a newly added resource and make it available upon refresh. Once the application package appears in the library share, we can import the StockTrader service template. As VMM 2012 reads in the content of a service template for the first time, all resources referenced by the service template are validated against the private cloud fabric settings. For instance, when developing/testing an application in a development environment, the employed credentials and network naming are often different from those in production; individual settings must be validated against the corresponding ones in a target environment. Once validated, the application and associated resources become ready for employment in the private cloud fabric.
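The same refresh can also be scripted. Here is a minimal sketch, assuming the VMM 2012 command shell and the library share of this lab (the default share name, MSSCVMMLibrary, is my assumption; adjust the path to your environment):

# Re-index a library share on demand instead of waiting for the refresh cycle.
$share = Get-SCLibraryShare | Where-Object { $_.Path -eq '\\r2host.contoso.corp\MSSCVMMLibrary' }
Read-SCLibraryShare -LibraryShare $share   # newly copied resources appear once indexing completes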
Recall that fabric is an important abstraction in cloud computing and signifies the ability to discover, identify, and manage computing resources. The presumption is that if a resource is added into one of the three resource pools in the private cloud fabric, it can be discovered, identified, and managed by VMM 2012. The importing process is in essence to examine a service template and flag settings for corrective actions, as applicable, such that all resources referenced by the service template are validated via the associated library servers where the resources reside.
The following illustrates the process of importing the StockTrader service template. If you want to import sensitive settings such as passwords, product keys, and application and global settings that are marked as secure, select the Import sensitive template settings check box. If you do not want to import sensitive data, you can update the references after the import process.
When VMM 2012 examines a service template, those references not properly resolved are listed with yellow warning triangles. In such a case, edit and validate an entry by clicking the pencil icon. Each entry with a red cross is actually an indicator that the referenced resource is validated, as shown below.
Like many Microsoft products, behind the scenes VMM 2012 is implemented with PowerShell, and a set of scripts associated with a series of operations with specified settings can be easily generated for later batch processing and automation. The following shows the View Scripts button available for generating a PowerShell script during a service template import.
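As an illustration of scripting the import itself, here is a minimal, hypothetical sketch from the command shell. Import-SCTemplate is the VMM 2012 cmdlet for this task, while the package path and the exact parameter names below are my assumptions; the script generated by View Scripts in your environment, or Get-Help Import-SCTemplate, is the authoritative reference.

# Hypothetical sketch; verify the exact parameter set with Get-Help Import-SCTemplate.
$package = Get-SCTemplatePackage -Path '\\r2host.contoso.corp\MSSCVMMLibrary\StockTrader'
# -SettingsIncludePrivate corresponds to the Import sensitive template settings check box.
Import-SCTemplate -TemplatePackage $package -SettingsIncludePrivate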
Upon a successful import, the service template is now listed as a resource available for deployment. Check the properties, as shown below, to reveal important information including service settings and dependencies defined in the service template.
StockTrader is a 4-tier application, and its tiers are reflected in the service template properties, as illustrated below. The VHDs, Server App-V packages, customization scripts, etc. to be installed are all listed under an associated VM template. When instantiating a VM instance, these dependencies come into effect and ensure all requirements are orchestrated and met along the deployment process.
At this time, the StockTrader service template is successfully imported and ready for use. Next is to examine the application architecture defined and configured in the service template. Life is good so far.
[To Part 1, 3, 4, 5]
To accelerate the learning of private cloud, a direct and effective way is to walk through the process of deploying one. And that is what this blog post and screencast series will deliver by detailing the essential operations and steps to deploy and manage a service in a private cloud with SCVMM 2012, including:
The process I am focusing on in this series starts from the signoff of a to-be-deployed application, here StockTrader. In this series, I, as a private cloud administrator, will walk through the process to “deploy” StockTrader as a service to a target private cloud. How the application was developed, configured, and packaged is not the subject here. How it is to be deployed as a service to a target private cloud is. Deploying and managing an application as a service is an important concept and a key delivery of VMM 2012. The following further explains.
Notice that in VMM 2012 a service means specifically a set of VMs which collectively deliver a business function. At an operational level, this set of VMs can be configured, deployed, and managed as a whole, i.e. as one entity. This is achieved in VMM 2012 by employing a service template. By predefining the application architecture with the content, configurations, deployment operations, and procedures of an intended application in a VMM 2012 service template, we can now essentially deploy an application architecture with a running instance of an intended application, i.e. deploy an application as a service. And by managing the instance of a service template, we are managing all associated resources of a running instance of an intended application, which may encompass multiple VM instances in multiple tiers.
This is an end-to-end sample application based on Windows Communication Foundation and ASP.NET. StockTrader is designed as a high-performance application that can seamlessly scale out across multiple servers with load balancing and failover at the service-request level. In addition, the application can be deployed to the Windows Azure Platform, a private cloud, or a hybrid environment with secure communication between Windows Azure instances and on-premises services. It illustrates many of the .NET enterprise development technologies for building highly scalable, rich, “cloud-connected” applications.
The StockTrader application package I downloaded from http://connect.microsoft.com (find more details at the end of this blog post) includes pre-baked sysprepped VHD images, application code, scripts, App-V packages, and a service template which defines, with VM templates, the multi-tier application architecture, the operations and procedures, the dependencies and intelligence, etc. We will use the provided service template to deploy StockTrader as a service to a target private cloud.
From a consumer’s point of view, regardless of where and how StockTrader is deployed, it is a web application. The cloud connotation is relevant mainly to a service provider, to signify the ability to deploy, exhibit, and manage an application with the 5-3-2 principle of cloud computing or NIST SP 800-145.
The test lab is a simple environment including a Windows domain with a VMM 2012 server and a Hyper-V host as members. This lab is the starting point of a private cloud environment. It is a test lab, not an ideal nor a realistic representation of all the components/functions needed to deliver a comprehensive private cloud solution. A comprehensive private cloud solution, including configuration management, a deployment vehicle, process automation, a service/help desk, a virtual machine manager, a self-service portal, etc., is what System Center 2012 delivers. For those who would like to build a test lab similar to mine, here is the hardware and software information:
You will need 64-bit hardware to build a Windows domain with a domain controller, a SCVMM 2012 server, and a Hyper-V host to get started with a simple yet realistic enough test lab. The Hyper-V host needs direct access to the hardware since it is the root/parent partition running virtual machines. A great poster to help you better understand Hyper-V is available at http://aka.ms/free. The other two, the domain controller and the SCVMM 2012 box, can be physical or virtual machines. And as needed, other System Center 2012 family members can later be added into the environment to form a comprehensive private cloud solution. Having a SCVMM 2012 server and a Hyper-V host in a Windows domain is the beginning and the essentials to start building a private cloud solution.
I set up the environment with my laptop, where the booted Windows Server 2008 R2 SP1 instance, i.e. the root partition, is a Hyper-V host and a member of the contoso.corp domain, which includes a domain controller and a SCVMM 2012 server, both virtual machines each running as a guest OS. The following is the hardware information.
As far as the hardware is concerned, RAM is a significant resource in virtualization and where I will spend my money.
[To Part 2, 3, 4, 5]
To setup your Windows Server 2012 lab for the "Early Experts" Challenge and/or IT Camp, you'll need a PC that meets the following requirements:
NOTE: If your PC does not meet the above requirements, do not continue with this process. Instead, you may prefer to build a Windows Server 2012 box in the cloud by leveraging Windows Azure Virtual Machines. Building your lab in the cloud will allow you to complete most hands-on activities in this study group, but will not permit you to perform hands-on activities related to Windows Server 2012 Hyper-V.
DISCLAIMER: This process installs Windows Server 2012 in a dual-boot scenario using the Boot-to-VHD features in Windows Vista, Windows 7, and Windows 8. While this process is not intended to disrupt your existing OS installation, these steps are for use at your own risk. No support or warranties are implied or provided.
If you attended one of our live Private Cloud TechNet events delivered recently in the major metros along the US east coast, hopefully we’ve inspired you to build out your own private cloud test environment with downloadable evaluation products. This is the 2nd episode of our post-event conference call, a follow-up to further discuss building private cloud with Windows Server 2008 R2 SP1 and System Center 2012. For those who are interested, there is also additional information, free eBooks, and posters available to better understand some of the enabling technologies of cloud computing.
Amid the many benefits of having NIST SP 800-145 as a tool to facilitate understanding, the classification and some definitions of the four deployment models are redundant and inconsistent. Particularly, the definition of “community cloud” is redundant with that of a private cloud, the deployment models are defined with two sets of criteria, and “hybrid cloud” is a confusing, ambiguous, and extraneous term.
SP 800-145 is the de facto standard in the IT industry for describing what cloud computing is, with five essential characteristics, three delivery methods, and four deployment models. The five essential characteristics well specify the qualifications and expected behaviors of an object qualified with the term cloud. The three delivery methods signify the essence of cloud computing centered on the concept of a “service.” Both the characteristics and the delivery methods in SP 800-145 form a solid foundation and present a conceptual model envisioning what cloud computing is about. SP 800-145 gets inconvenient where the four deployment models, including public, community, private, and hybrid clouds, are defined, as shown below.
Reviewing the definitions of the first three deployment models, there is a common theme. Among public, community, and private clouds, the classification is based on the intended audience to whom a cloud with its resources is dedicated. Namely, a public cloud is intended to be consumed by the general public, and a private cloud is dedicated to a single organization, i.e. to a targeted group of users. SP 800-145 thus classifies a private cloud and a public cloud with consistent criteria.
It is important to recognize that building a cloud with owned hardware does not by default make it the owner’s private cloud, while accessibility via the Internet or operation by an internet service provider does not automatically make a cloud public either. Again, the intended audience determines whether it is a private or a public cloud. Although many seem to assume a private cloud defaults to an on-premises deployment on owned hardware, this is nonetheless not a requirement of a private cloud.
Further, “public” here does not suggest that a cloud is free or accessible anonymously. It simply means the cloud is dedicated for the general public to consume, while there can be business or administrative restrictions imposed. Microsoft Office 365, available based on a subscription, and Hotmail, requiring a Live ID to sign in, are vivid examples of public cloud offerings with restrictions.
Inconvenience #1: The classification of “community cloud” is extraneous.
A community cloud, according to 800-145, is a cloud for a specific community of consumers from multiple organizations. As far as a member of the associated community is concerned, a community cloud is indeed a private cloud for that particular community. The number of organizations and the administrative boundaries encompassing a community are irrelevant, since from a private cloud’s viewpoint an authorized user is an authorized user regardless of which organization one belongs to. A cloud for a community of users, whether from various departments and business units within a company or from business partners in many parts of the world, is essentially a private cloud dedicated to that community.
Inconvenience #2: Using two sets of criteria to define cloud deployment models roots inconsistency and ambiguity.
As defined in SP 800-145, a hybrid cloud is a composition of infrastructures, yet at the same time a private cloud and a public cloud are defined according to their intended audiences. The change of criteria in classifying a hybrid cloud roots inconsistency and ambiguity in the deployment models presented in SP 800-145. Forming a concept with two sets of criteria is simply a confusing way to describe an already very confusing subject like cloud computing.
Inconvenience #3: "Hybrid cloud" is an ambiguous, confusing, and frequently misused term.
A hybrid cloud is a composition of two or more distinct cloud infrastructures (private, community, or public), as stated in SP 800-145. That is to say a hybrid cloud can be a composition of private/private, private/community, private/public, etc. From a consumer’s point of view, these are in essence a private cloud, a private cloud, and a public or private cloud, respectively. Regardless of how a hybrid cloud is constructed, if it is intended for public consumption it is a public cloud, and if for a particular group of people it is then a private cloud, according to SP 800-145. Essentially, a composition of clouds is still a cloud, and it is either a public or a private cloud; it cannot be both at the same time.
For many enterprise IT professionals, a hybrid cloud means an on-premises private cloud connected with some off-premises resources. Notice these off-premises resources are not necessarily, in reality, a cloud. In such a case, it is simply a private cloud with some extended boundaries. A cloud is a set of capabilities and must be referenced in the context of the delivered application. Just placing a VM in the cloud, or referencing a database placed in the cloud, does not make the VM or the database itself a public cloud application.
The key is that a hybrid cloud is a derived concept of clouds. Namely, a hybrid can be an integration, modification, or extension of cloud infrastructures, or a combination of all of these. A hybrid is nevertheless not a new concept or a different deployment model, and should not be classified as a unique deployment model in addition to the two essential ones, i.e. the public and private cloud models. A cloud is either public or private, and there isn’t a third kind of cloud deployment model based on the intended users.
“Hybrid cloud” is perhaps a great, catchy marketing term. For many, a hybrid seems to suggest something advanced, leading-edge, and magical, and therefore better and preferred. The truth is that “hybrid cloud” is an ambiguous, confusing, and frequently misused term. It confuses people, interjects noise into a conversation, and only further confirms the state of confusion and the inability to clearly understand what cloud computing is.
Virtualization vs. private cloud has confused many IT pros. Are they the same? Or different? In what way, and how? We have already virtualized most of our computing resources; is a private cloud still relevant to us? These are questions I have been frequently asked. This is the 2nd article of the series, as shown below, which answers these specific questions.
Be Mindful of What a Private Cloud Is
Above all, a private cloud is a deployment model of cloud computing. And since it is cloud computing, the 5-3-2 Principle, or NIST SP 800-145 as preferred, applies. When one claims to have a private cloud, we can easily verify whether it actually is one with a few simple questions. Is it delivered with (or at least with a flavor of) SaaS, PaaS, or IaaS? Does it have a self-servicing component? Is there resource pooling, and is it elastic? Namely, are standardization, optimization, and automation of resource management implemented in the underlying infrastructure, such that a fabric-like abstraction can be formed with resource pools to possibly offer location transparency? This is what a private cloud, or a cloud application in general, must exhibit.
Notice that hardware ownership is not a criterion. For a private cloud, hardware ownership is not a necessary condition: one does not need to own the hardware to have a private cloud. Nonetheless, many seem to implicitly assume hardware ownership is part of a private cloud. This is likely because, for enterprise IT to transition into a private cloud environment, the logical approach is to assess and convert, as applicable, what has been established into a private cloud; in such a case, the infrastructure is likely already owned and operated by the IT department.
Be Clear on What Private Cloud Delivers
A private cloud is a cloud with the cloud infrastructure dedicated to an organization. On premises or hosted by a 3rd party, a private cloud is expected to exhibit three, if not more, of the five essential characteristics of cloud computing, namely resource pooling, rapid elasticity, and self-service, to differentiate itself from highly virtualized computing. The point is that as far as virtualization is concerned, these three key attributes are not necessary conditions, while for a private cloud, the three are. Reviewing the five characteristics which cloud computing delivers, you will notice that I do not include ubiquitous access and a consumption-based chargeback model as key attributes for a private cloud. These two are not considered essential because, for a private cloud, the owning organization may want to restrict availability instead of allowing ubiquitous access, and a chargeback model may not be administratively necessary or technically feasible to implement in an organization. The following depicts the concept:
There is no question that virtualization is a key technology enabler in transforming to a cloud environment. Without a technically mature and economically affordable virtualization solution in place, cloud computing will be more a concept than an implementation. Nonetheless, make no mistake about it: virtualization is not, and in fact is far from, a private cloud. Without resource pooling, elasticity, and a self-servicing mechanism to fundamentally design efficiency into the cloud architecture and minimize overhead, implementing a private cloud can quickly become cost-prohibitive and without a predictable ROI in the long run. With these key attributes and a virtualization solution, a private cloud therefore forms.
[Back to part 1]
More hybrid cloud resources at http://aka.ms/all.
You want to download this one!
Exchange Server 2010 Architecture Poster
A key feature delivered by VMM 2012 is the ability to deploy an application based on a service template, which enables a push-button deployment of a target application infrastructure. VMM 2012 signifies a direct focus, embedded in product design, on addressing the entire picture of a delivered business function, rather than presenting fragmented views of individual VMs. VMM 2012 makes a major step forward and declares the quintessential arrival of IT as a Service by providing out-of-box private cloud product readiness for enterprise IT.
In this fourth article of the 5-part series on VMM 2012, I further explain the significance of employing a service template. This is, in my view, the pinnacle of VMM 2012 deliveries. The idea is apparent: to deliver business functions with timeliness and cost-effectiveness by standardizing and streamlining the application deployment process. Here I focus on the design and architectural concepts of a service template to help a reader better understand how VMM 2012 accelerates the process of building a private cloud with consistency, repeatability, and predictability. The steps and operations to deploy and administer a private cloud with a service template will be covered in upcoming screencasts as supplements to this blog post series.
The term service in VMM 2012 means a set of VMs to be configured, deployed, and managed as one entity. And a service template defines the contents, operations, dependencies, and intelligence needed to do a push-button deployment of an application architecture with a target application configured and running according to specifications. This enables a service owner to manage not only individual VMs, but the business function in its entirety, delivered as a (VMM 2012) service. Here, for instance, a service template developed for StockTrader is imported and displayed in the Service Template Designer of VMM 2012, as shown below.
Application Deployment as Service via IaaS
Since VMM 2008, Microsoft has offered a private cloud deployed with IaaS. Namely, a self-service user can be authorized with the ability to provision infrastructure, i.e. deploy VMs to an authorized environment, on demand. While VMs can be deployed on demand, what is running within those VMs, when, and how is however not a concern of VMM 2008.
VMM 2012 on the other hand is designed with service deployment and private cloud readiness in mind. In addition to deploying VMs, VMM 2012 can now deploy services. As mentioned earlier, a service in VMM 2012 is an application delivered by a set of VMs which are configured, deployed, and maintained as one entity. More specifically, VMM 2012 can deploy on demand not only VMs (i.e. IaaS), but VMs collectively configured as an instance of a defined application architecture for hosting a target application by employing a service template. As VMs are deployed, an instance of a defined application architecture is automatically built, and a target application hosted in the architecture becomes functional and available. VMM 2012 therefore converts an application deployment into a service via IaaS.
The Rise of Service Architect
Importantly, a service template capturing everything relevant to an application deployment is an integral part of application development and production operations. A seasoned team member (whom I call a Service Architect) with a solid understanding of application development and specifications, private cloud fabric construction, and production IT operations is an ideal candidate for authoring service templates.
Context and Operation Models
In a private cloud setting, an enterprise cloud administrator constructs the fabric, validates service templates, and acts as a service provider. Service owners are those self-service users authorized to deploy services to intended private clouds using the VMM 2012 admin console, and they act as consumers. Therefore, while enterprise IT constructs the fabric and validates service templates, a service owner deploys services based on authorized service templates to authorized private clouds on demand. Notice a self-service user can access authorized templates, instances of VMs and services, private clouds, etc. A self-service user nevertheless does not see the private cloud fabric in the VMM 2012 admin console or App Controller.
Setting the context at an application level, a service owner deploys a service based on an authorized service template to an authorized private cloud on demand, and here the service owner acts as a service provider. At the same time, an authorized end user can access the application’s URL and acts as a consumer. In this model, an end user does not know, and has no need to know, how the application is deployed. As far as a user is concerned, the experience of accessing a service in a private cloud is similar to accessing a web application.
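For reference, provisioning such a self-service user role can be sketched in the VMM command shell roughly as follows; this is a minimal sketch, and the cloud, role, and member names are hypothetical:

# Create a self-service user role and scope it to a private cloud (names are hypothetical).
$cloud = Get-SCCloud -Name 'Contoso Private Cloud'
$role  = New-SCUserRole -Name 'StockTrader Service Owner' -UserRoleProfile 'SelfServiceUser'
Set-SCUserRole -UserRole $role -AddMember 'CONTOSO\alice' -AddScope $cloud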
Standardization, Consistency, Repeatability, and Predictability
What is specified in a service template includes the static definitions and pre-defined criteria of the what, how, and when, plus the inter-dependency and event-driven information to automate the deployment process of an application. To be able to deploy an application multiple times with the same service template in the same environment, there is also instance information, like machine names, which is generated, validated, and locked down by VMM 2012 right before deployment, when clicking Configure Deployment in Service Template Designer. The separation of instance information from the static variables and event-driven operations among the VMs of an application offers an opportunity to standardize a deployment process with consistent configurations, repeatable operations, and predictable outcomes.
A service template is in essence a cookie cutter which can reproduce content according to predefined specifications, in this case the shape of a cookie. A service based on a VMM 2012 service template can be deployed multiple times on the same fabric, i.e. the same infrastructure, by validating the instance information in each deployment. This is similar to using the same cookie cutter with various cookie doughs: the instances are different, yet the specifications are identical.
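To make the cookie-cutter idea concrete, the following is a minimal command shell sketch of what Configure Deployment accomplishes for one instance; the cloud and instance names are my assumptions:

$template = Get-SCServiceTemplate -Name 'StockTrader'
$cloud    = Get-SCCloud -Name 'Contoso Private Cloud'   # hypothetical cloud name
# The service configuration carries the per-instance information, e.g. machine names,
# kept separate from the static definitions in the service template.
$config = New-SCServiceConfiguration -ServiceTemplate $template -Name 'StockTrader 01' -Cloud $cloud
Update-SCServiceConfiguration -ServiceConfiguration $config   # validate instance settings against the fabric
New-SCService -ServiceConfiguration $config   # deploy; repeat with a new configuration for another instance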
Deployment with a service template can greatly simplify an upgrade scenario of an already deployed application. First, the production application infrastructure of StockTrader can be realistically and relatively easily mimicked in a test environment by configuring and deploying the same service template to a private cloud for development, such as one on an isolated logical network with a 192.168.x.x subnet defined in the Networking pool of the private cloud fabric in VMM 2012. A new release of the application, 2011.11.24 for example, based on a service template (Release 2011.11), can then be developed and tested in this development environment.
Once the development process is concluded and the service template of Release 2011.11.24 is ready to be deployed, a cloud administrator can then import the service template and associated resources, as applicable, into the private cloud fabric, followed by validating the resource mapping so all references in Release 2011.11.24 point to those in production. Upgrading an application from Release 2011.11 to Release 2011.11.24 at this point is simply a matter of applying the production instance to the service template of Release 2011.11.24. It is quite straightforward from the VMM 2012 admin console: right-click the instance to be upgraded and set a target template, as shown below.
This process is wizard-driven. Depending on how an application’s upgrade domain is architected, the current application state, and the nature of the changes, an application outage may or may not be necessary. The following highlights the process of replacing the service template of an instance of the StockTrader service from Release 2011.11 to Release 2011.11.24.
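In script form, the same template replacement can be sketched as follows, assuming the VMM command shell and the release names of this example; the new release is staged as pending and then applied:

$service     = Get-SCService -Name 'StockTrader'
$newTemplate = Get-SCServiceTemplate -Name 'StockTrader' | Where-Object { $_.Release -eq '2011.11.24' }
Set-SCService -Service $service -ServiceTemplate $newTemplate   # stage Release 2011.11.24 as the pending template
Update-SCService -Service $service   # apply the pending template to the running instance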
There are different ways that a new service template can be applied to a running instance. For an authorized self-service user, the above process can also be easily carried out with App Controller, which I will detail in Part 5 of this blog post series.
In VMM 2012, deleting a running service will stop and erase all the associated VM instances. Nevertheless, the resources referenced in the service template are still in place. To delete a service template, all configured deployments and deployed instances must be deleted first.
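A minimal sketch of that teardown order, under the same assumptions as the earlier sketches:

# Deployed instances go first; library resources referenced by the template stay in place.
Get-SCService -Name 'StockTrader' | Remove-SCService
Get-SCServiceTemplate -Name 'StockTrader' | Remove-SCServiceTemplate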
As private clouds are built and services are deployed, releases of services can be documented by archiving individual service templates with associated resources. Notice this is not about backing up the instances and data associated with the instances of an application, but about keeping, as preferred, records of all the resources, configurations, operations, and intelligence needed to successfully deploy the application.
With the maturity of virtualization and the introduction of cloud computing, IT is changing at an increasing speed and the industry is transforming as we speak. VMM 2012 essentially substantiates the arrival of IT as a Service in enterprise IT. While the challenges are overwhelming, the opportunities are at the same time exciting and extraordinary. IT professionals should not and must not hesitate anymore, but get started on private cloud, and get started now. Be crystal clear on what cloud is and why virtualization is far from cloud. Do master Hyper-V and learn VMM 2012. And join the conversation and take a leading role in building private cloud. With the strength of a new day dawning and the beautiful sun, there is no doubt in my mind that a clear cloudy day is in sight for all of us.
[To Part 1, 2, 3, 4, 5]
Have questions about Microsoft's Private Cloud solutions? If you attended one of the live Private Cloud TechNet events, hopefully we’ve inspired you to build out your own test environment with our downloadable evaluation products. If you did, you may have some follow-up questions after the event. If you didn't attend an in-person event but have questions regarding private cloud computing, all are welcome to join us during this fun, interactive Q&A online session.
Although cloud computing is emerging as a main delivery vehicle for IT services, there is much confusion about what it is and how it works. This session brings clarity by examining the facts and common knowledge of IT and builds an essential body of knowledge about cloud computing with three easy-to-remember numbers. The discussion includes definitions of cloud computing, where the opportunities are, and how IT professionals can play an enabling role in transforming an existing IT establishment into a cloud-friendly, cloud-ready environment. We review the architecture and examine the service delivery models of cloud computing while we step through business scenarios and the operations of employing public cloud and private cloud.
By this time, I assume we all have some clarity that virtualization is not cloud. There are indeed many significant differences between the two. A main departure is the approach to deploying applications. In this 3rd article of the 5-part series as listed below, I would like to examine the service-based deployment introduced in VMM 2012 for building a private cloud.
VMM 2012 has the ability to carry out both traditional virtual machine (VM)-centric and emerging service-based deployments. The former is virtualization-focused and operates at a VM level, while the latter is a service-centric approach intended for private cloud deployment.
This article is intended for those with some experience administering a VMM 2008 R2 infrastructure. Notice that in cloud computing, “service” is a critical and must-understand concept, which I have discussed elsewhere. And just to be clear, in the context of cloud computing a “service” and an “application” mean the same thing, since in cloud everything delivered to a user is delivered as a service, for example SaaS, PaaS, and IaaS. Throughout this article, I use the terms service and application interchangeably.
In virtualization, deploying a server has conceptually become building/shipping a virtual hard disk (VHD) file and booting from it. Those who would like to refresh their knowledge of virtualization are invited to review the 20-Part Webcast Series on Microsoft Virtualization Solutions.
Virtualization has brought many opportunities for IT to improve processes and operations. With system management software such as System Center Virtual Machine Manager 2008 R2, or VMM 2008 R2, we can deploy VMs and install an OS to a target environment with few or no operator interventions. And yet from an application point of view, with or without automation, the associated VMs are essentially deployed and configured individually. For instance, a multi-tier web application like the one shown above is typically deployed with a pre-determined number of VMs, followed by installing and configuring the application among the deployed VMs individually based on application requirements. Particularly when there is a back-end database involved, a system administrator typically must follow a particular sequence to first bring a target database server instance online by configuring specific login accounts with specific DB roles, securing specific ports, and registering in AD before proceeding with subsequent deployment steps. These operator interventions are required likely due to the lack of a cost-effective, systematic, and automatic way to streamline and manage the concurrent and event-driven inter-VM dependencies which become relevant at various moments during an application deployment.
Even when there is a system management infrastructure in place, like VMM 2008 R2 integrated with other System Center members, at an operational level VMs are largely managed and maintained individually in a VM-centric deployment model. And perhaps more significantly, in a VM-centric deployment it is too often labor-intensive, with a relatively high TCO, to deploy a multi-tier application “on demand” (in other words, as a service), deploy it multiple times, and run multiple releases concurrently in the same IT environment, if it is technically feasible at all. Now in VMM 2012, the abilities to deploy services on demand, deploy them multiple times, and run multiple releases concurrently in the same environment become noticeably straightforward and amazingly simple with a service-based deployment model.
A VM-centric model lacks an effective way to address event-driven and inter-VM dependencies during a deployment, nor is there a concept of fabric, which is an essential abstraction of cloud computing. In VMM 2012, a service-based deployment means all the resources encompassing an application, i.e. the configurations, installations, instances, dependencies, etc., are deployed and managed as one entity with fabric. The integration of fabric in VMM 2012 is a key delivery and is clearly illustrated in the VMM 2012 admin console, as shown on the left. And the precondition for deploying services to a private cloud is all about first laying out the private cloud fabric.
To deploy a service, the process normally employs administrator and service accounts to carry out the tasks of installing and configuring the infrastructure and application on servers, networking, and storage based on application requirements. Here the servers collectively act as a compute engine to provide a target runtime environment for executing code. Networking interconnects all relevant application resources and peripherals to support all management and communications needs, while storage is where code and data actually reside and are maintained. In VMM 2012, the server, networking, and storage infrastructure components are collectively managed with a single concept: the private cloud fabric.
There are three resource pools/nodes encompassing fabric: Servers, Networking, and Storage. Servers contain various types of servers, including virtualization host groups, PXE, update (i.e. WSUS), and other servers. Host groups are containers to logically group servers with virtualization hosting capabilities, and they ultimately represent the physical boxes where VMs can possibly be deployed, either with specific network settings or dynamically selected by VMM Intelligent Placement, as applicable, based on defined criteria. VMM 2012 can manage Hyper-V-based, VMware, and other virtualization solutions. While adding a host into a host group, VMM 2012 installs an agent on the target host, which then becomes a managed resource of the fabric.
A Library Server is a repository where the resources for deploying services and VMs are available via network shares. As a Library Server is added into fabric, by specifying the network shares defined on the Library Server, file-based resources like VM templates, VHDs, ISO images, service templates, scripts, Server App-V packages, etc. become available to be used as building blocks for composing VM and service templates. As various types of servers are brought into the Servers pool, the coverage expands and the capabilities increase, as if additional fibers are woven into the fabric.
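To illustrate, populating the Servers pool can be sketched from the VMM command shell as follows; the host names, share path, and Run As account are hypothetical:

$creds = Get-SCRunAsAccount -Name 'Fabric Admin'   # hypothetical Run As account
$group = New-SCVMHostGroup -Name 'Production Hosts'
Add-SCVMHost -ComputerName 'r2host.contoso.corp' -VMHostGroup $group -Credential $creds   # VMM pushes its agent to the host
Add-SCLibraryServer -ComputerName 'library.contoso.corp' -Credential $creds
Add-SCLibraryShare -SharePath '\\library.contoso.corp\MSSCVMMLibrary'   # expose the share's resources to the fabric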
Networking presents the wiring among resource repositories, running instances, deployed clouds and VMs, and the intelligence for managing and maintaining the fabric. It essentially forms the nervous system to filter noise, isolate traffic, and establish interconnectivity among VMs based on how Logical Networks and Network Sites are put in place.
Storage reveals the underlying storage complexities and how storage is virtualized. In VMM 2012, a cloud administrator can discover, classify and provision remote storage on supported storage arrays through the VMM 2012 console. VMM 2012 fully automates the assignment of storage to a Hyper-V host or Hyper-V host cluster, and tracks the storage that is managed by VMM 2012.
Deploying Private Cloud
A leading feature of VMM 2012 is the ability to deploy a private cloud, or more specifically to deploy a service to a private cloud. The focus of this article is to depict the operational aspects of deploying a private cloud with the assumption that an intended application has been well tested, signed off, and sealed for deployment, and that the application resources, including code, the service template, scripts, Server App-V packages, etc., are packaged and provided to a cloud administrator for deployment. In essence, this package has all the intelligence, settings, and contents needed to be deployed as a service. This self-contained package can then be easily deployed on demand by validating instance-dependent global variables and repeating the deployment tasks on a target cloud. The following illustrates the concept, where a service is deployed in update releases and various editions with specific feature compositions, all running concurrently in the VMM 2012 fabric. Not only is this relatively easy to do by streamlining and automating all deployment tasks with a service template, but the service template can also be configured and deployed to different private clouds.
The secret sauce is a service template, which includes all the where, what, how, and when of deploying all the resources of an intended application as a service. It should be apparent that the skill sets and amount of effort needed to develop a solid service template are not trivial, because a service template needs to include not only intimate knowledge of an application, but also the best practices of Windows deployment, in addition to system and network administration, Server App-V, and system management of Windows servers and workloads. The following is a sample service template of StockTrader imported into VMM 2012 and viewed with the Designer, where StockTrader is a sample application for cloud deployment downloaded from Windows Connect.
Here are the logical steps I follow to deploy StockTrader with the VMM 2012 admin console, summarized in the command shell sketch below:
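In command shell form, a minimal, hypothetical equivalent of those console steps looks like this; the template, cloud, and configuration names are my assumptions:

# 1. Select the imported service template and the target private cloud.
$template = Get-SCServiceTemplate -Name 'StockTrader'
$cloud    = Get-SCCloud -Name 'Contoso Private Cloud'
# 2. Configure the deployment: generate and validate the instance-specific settings.
$config = New-SCServiceConfiguration -ServiceTemplate $template -Name 'StockTrader Pro Release' -Cloud $cloud
Update-SCServiceConfiguration -ServiceConfiguration $config
# 3. Deploy; VMM orchestrates VM creation and application installation per the template.
New-SCService -ServiceConfiguration $config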
A successful deployment of StockTrader with minimal instances in my all-in-one-laptop demo environment (running on a Lenovo W510 with sufficient RAM) took about 75 to 90 minutes, as reported in the Job Summary shown below.
Once the service template is successfully deployed, StockTrader becomes a service in the target private cloud supported by the VMM 2012 fabric. The following two screen captures show a Pro Release of StockTrader deployed to a private cloud in VMM 2012 and the user experience of accessing a trader’s home page.
Not If, But When
Witnessing the way the IT industry has been progressing, I envision that private cloud will soon become, just like virtualization, a core IT competency and no longer a specialty. While private cloud is still a topic that is being actively debated and shaped, the upcoming release of VMM 2012 presents, just in time, a methodical approach for constructing a private cloud based on service-based deployment with fabric. It is a high-speed train and the next logical step for enterprises to accelerate private cloud adoption.
I here forecast the future is mostly cloudy with scattered showers. In the long run, I see a clear cloudy day coming.
“Be ambitious and opportunistic” is what I will encourage everyone to be. When it comes to Microsoft private cloud, the essentials are Windows Server 2008 R2 SP1 with Hyper-V and VMM 2012. And those who first master these skills will stand out, become the next private cloud subject matter experts, and lead the IT pro communities. While recognizing private cloud adoption is not a technology issue, but a culture shift and an opportunity for career progression, IT pros must make the first move.
In an upcoming series of articles tentatively titled “Deploying StockTrader as Service to Private Cloud with VMM 2012,” I will walk through the operations of the above steps and detail the process of deploying a service template to a private cloud.
Aside from public cloud, private cloud, and something in between, the essence of cloud computing is fabric. This 2nd article of the 5-part series annotates the concept and methodology of forming a private cloud fabric with VMM 2012. Notice that throughout this article, I use the following pairs of terms interchangeably:
And this series includes:
Fabric in Windows Azure Platform: A Simplistic, Yet Remarkable View of Cloud
In cloud computing, fabric is a frequently used term. It is nevertheless not a product, nor a packaged solution that we can simply unwrap and deploy. Fabric is an abstraction, an architectural concept, and a state of manageability to conceptually denote the ability to discover, identify, and manage the lifecycle of instances and resources of a service. In an oversimplified analogy, fabric is the collection of hardware, software, wiring, configurations, profiles, instances, diagnostics, connectivity, and everything else that all together form the datacenter(s) where a cloud is running. Fabric Controller (FC, a term coined by the Windows Azure Platform) is likewise an abstraction to signify the ability, and designate the authority, to manage the fabric in a datacenter and all instances and associated resources supported by the fabric. As far as a service is concerned, FC is the quintessential owner of the fabric, the datacenters, and the world, so to speak.
Hence, without the need to explain the underlying physical and logical complexities in a datacenter, i.e. how hardware is identified and allocated, how a virtual machine (VM) is deployed to and remotely booted from bare metal, how application code is loaded and initialized, how a service is started and reports its status, how required storage is acquired and allocated, and on and on, we can now summarize the 3,500-step process, for example, to bring up a service instance in the Windows Azure Platform by virtually saying that FC deploys a service instance with fabric.
Fundamentally, what a PaaS user expects is that a subscribed runtime (or “platform,” as preferred) environment is in place so cloud applications can be developed and run. And for an IaaS user, it is the ability to provision and deploy VMs on demand. How a service provider, which in a private cloud setting normally means corporate IT, makes PaaS and IaaS available is not a concern for either user. As a consumer of PaaS or IaaS, this is significantly helpful and allows a user to focus on what one really cares about: a predictable runtime to develop applications and the ability to provision infrastructure as needed, respectively. In other words, what happens under the hood of cloud computing is collectively abstracted and gracefully presented to users as “fabric.” This simplicity brings so much clarity and elegance by shielding extraordinary, if not chaotic, technical complexities from users. The stunning beauty unveiled by this abstraction is just breathtaking.
Fabric Concept and VMM 2012
Similar to what is in the Windows Azure Platform, fabric in VMM 2012 is an abstraction to hide the underlying complexities from users and signify the ability to define and manage resource pools as a whole. This concept is explicitly presented in the UI of the VMM 2012 admin console, as shown here on the right. There should be no mystery at all about what the fabric of a private cloud in VMM 2012 is. And a major task in the process of building a private cloud is to define/configure this fabric using the VMM 2012 admin console. Specifically, there are three definable resource pools:
Clearly the magnitude and complexities are not on the same scale when comparing the fabric of the Windows Azure Platform in public cloud with that of VMM 2012 in private cloud. Further, there are other implementation details, like replicating the FC throughout a geo-dispersed fabric, etc., not covered here, which complicate the FC in the Windows Azure Platform even more. The idea of abstracting away those details not relevant to what a user is trying to accomplish is nevertheless very much the same in both technologies. In a sense, VMM 2012 is an FC (in a simplistic form) of the defined fabric consisting of the Servers, Networking, and Storage pools. And in these pools, there are functional components and logical constructs which collectively constitute the fabric of a private cloud.
This pool embodies the containers hosting the runtime execution resources of a service. Host groups contain virtualization hosts as the destinations where virtual machines can be deployed based on authorization and service configurations. Library servers are the repositories of building blocks like images, ISO files, templates, etc. for composing VMs. To automatically deploy images and boot a VM from bare metal remotely via networks, pre-boot execution environment (PXE) servers are used to initiate the operating system installation on a physical computer. Update servers, like WSUS, are for servicing VMs automatically based on compliance policies. For interoperability, the VMM 2012 admin console can add VMware vCenter Servers to enable the management of VMware ESX hosts. And of course, the console has visibility into all authorized VMM servers, which form the backbone of the Microsoft virtualization management solution.
In VMM 2012, the Networking pool is where to define logical networks, assign pools of static IPs and MAC addresses, integrate load balancers, etc. to mash up the fabric. Logical networks are user-defined groupings of IP subnets and VLANs to organize and simplify network assignments. For instance, HIGH, MEDIUM, and LOW can be the definitions of three logical networks, such that real-time applications are connected with HIGH and batch processes with LOW, based on a specified class of service. Logical networks provide an abstraction of the underlying physical infrastructure and enable an administrator to provision and isolate network traffic based on selected criteria like connectivity properties, service-level agreements (SLAs), etc. By default, when adding a Hyper-V host to a VMM 2012 server, VMM 2012 automatically creates logical networks that match the first DNS suffix label of the connection-specific DNS suffix on each host network adapter.
In VMM 2012, you can configure static IP address pools and static MAC address pools. This functionality enables you to easily allocate the addresses for Windows-based virtual machines running on any managed Hyper-V, VMware ESX, or Citrix XenServer host, and gives much room for creativity in managing network addresses. VMM 2012 also supports adding hardware load balancers to the VMM console and creating associated virtual IP (VIP) templates, which contain load balancer-related configuration settings for a specific type of network traffic. Those readers with networking or load-balancing interests are highly encouraged to experiment with and assess the networking features of VMM 2012.
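To make this concrete, defining a logical network with a network site and a static IP pool can be sketched as follows; the names, subnet, and VLAN are hypothetical:

$logical = New-SCLogicalNetwork -Name 'HIGH'   # e.g. the class of service for real-time applications
$vlan    = New-SCSubnetVLan -Subnet '192.168.10.0/24' -VLanID 10
$site    = New-SCLogicalNetworkDefinition -Name 'HIGH - Site A' -LogicalNetwork $logical -SubnetVLan $vlan -VMHostGroup (Get-SCVMHostGroup -Name 'Production Hosts')
New-SCStaticIPAddressPool -Name 'HIGH Pool' -LogicalNetworkDefinition $site -Subnet '192.168.10.0/24' -IPAddressRangeStart '192.168.10.50' -IPAddressRangeEnd '192.168.10.99'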
With the VMM 2012 admin console, an administrator can discover, classify, and provision remote storage on supported storage arrays. VMM 2012 uses the new Microsoft Storage Management Service (installed by default during the installation of VMM 2012) to communicate with external arrays. An administrator must install a supported Storage Management Initiative – Specification (SMI-S) provider on an available server, followed by adding the provider to VMM 2012. SMI-S is a storage standard for interoperating among heterogeneous storage systems. VMM 2012 automates the assignment of storage to a Hyper-V host or Hyper-V host cluster, and tracks the storage that is managed by VMM. Notice that storage automation through VMM 2012 is only supported for Hyper-V hosts.
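The storage workflow can likewise be sketched; the provider name, address, port, and Run As account below are hypothetical, and the exact parameter set may vary by build (see Get-Help Add-SCStorageProvider):

$creds = Get-SCRunAsAccount -Name 'Storage Admin'   # hypothetical Run As account
# Register the SMI-S provider so VMM can discover the arrays behind it.
Add-SCStorageProvider -Name 'ContosoArrays' -ComputerName 'smis.contoso.corp' -TCPPort 5988 -RunAsAccount $creds
# After discovery, classify the storage (e.g. GOLD, SILVER) and allocate it to host groups.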
Where There Is A Private Cloud, There Are IT Pros
Aside from public cloud, private cloud, and something in between, the essence of cloud computing is fabric. And when it comes to a private cloud, it is largely about constructing/configuring fabric. VMM 2012 has laid out what fabric is concerning a private cloud, along with prescriptive guidance on how to build it by populating the Servers, Networking, and Storage resource pools. I hope it is clear at this point that, particularly for a private cloud, forming fabric is not a programming commission, but one relying much on the experience and expertise of IT pros in building, operating, and maintaining an enterprise infrastructure. It’s about integrating the IT tasks of building images, deploying VMs, automating processes, managing certificates, hardening security, configuring networks, setting IPsec, isolating traffic, walking through traces, tuning performance, subscribing to events, shipping logs, restoring tables, etc., etc., etc. with the three resource pools. And yes, it’s about what IT professionals do every day to keep the system running. And that brings us to one conclusion.
Private cloud is the future of IT pros. And let the truth be told: “Where there is a private cloud, there are IT pros.”
This is the first article of a 5-part series examining the key architectural concepts and relevant operations of a private cloud based on VMM 2012, including:
VMM, a member of the Microsoft System Center suite, is an enterprise solution for managing policies, processes, and best practices with automation by discovering, capturing, and aggregating knowledge of a virtualization infrastructure. In addition to the system requirements and the new features and capabilities, there are specific concepts presented in this article which, although fundamental, are nevertheless important to know before building a private cloud solution with VMM 2012. This blog series also assumes a reader has a basic understanding of cloud computing. For those not familiar with cloud computing, I recommend first acquiring the baseline information from my 6-part series, the NIST definition, Chou's 5-3-2 Principle, and hybrid deployment.
Private Cloud in VMM 2012
Private cloud is a "cloud" which is dedicated to an organization, hence private. Notice that the classification of private cloud or public cloud is not based on where a service runs or who owns the employed hardware. Instead, the classification is based on whom, i.e. which users, a cloud is intended to serve. Which is to say that deploying a cloud on a company's hardware does not automatically make it the company's private cloud. Similarly, a cloud hosted on hardware owned by a 3rd party is not a public cloud by default either.
Nevertheless, as far as VMM 2012 is concerned, a private cloud is specifically deployed with an organization's own hardware, provisioned and managed on-premises by the organization. VMM 2012, succeeding VMM 2008 R2, represents a significant leap in enterprise system management and acts as a private cloud enabler to accelerate transitioning enterprise IT from an infrastructure-focused deployment model into a service-oriented, user-centric, cloud-ready, and cloud-friendly environment, as a reader will learn throughout this series. The best way to evaluate VMM 2012 is to download it and try it yourself.
And There Is This Thing Called "Fabric"
The key architectural concept of private cloud in VMM 2012 is the so-called fabric. Similar to its counterpart in the Windows Azure Platform, fabric in VMM 2012 is an abstraction layer that shields the underlying technical complexities and denotes the ability to manage defined resource pools of compute (i.e. servers), networking, and storage in the associated enterprise infrastructure. This concept is explicitly presented in the UI of the VMM 2012 admin console, as shown here on the right. With VMM 2012, an organization can create a private cloud from Hyper-V, VMware ESX, and Citrix XenServer hosts and realize the essential attributes of cloud computing including self-servicing, resource pooling, and elasticity.
Service in VMM 2012
One noticeable distinction of VMM 2012 compared with previous versions of VMM and other similar system management solutions is, in addition to deploying VMs, the ability to roll out a service. I have taken various opportunities in my previous blogs to emphasize how much being keen on what a service is and what cloud is matters for fully appreciating the business value brought by cloud computing. The term, service, is often used indiscriminately to explain cloud, and without a grip on what precisely a service is, cloud can indeed be filled with perplexities.
Essentially, the concept of a service in cloud computing is "capacity on demand." So delivering a service is to provide a business function which is available on demand, i.e. ideally with anytime, anywhere, and any-device access. In a private cloud, this is achieved mainly by a combination of a self-servicing model, management of resource pooling, and rapid elasticity, which are 3 of the 5 essential characteristics of cloud computing. Specific to private cloud, the 2 other characteristics, i.e. broad access to and a chargeback business model for the service (or simply the application, since in the context of cloud computing an application is delivered as a service), are non-essential: in a private setting an organization may not want to offer broad access to a service, and a chargeback model may not always be applicable or necessary, as already discussed elsewhere.
Particularly, a service in VMM 2012 is implemented by a set of virtual machines (VMs) working together to collectively deliver a business function. To deploy a service in VMM 2012 is therefore to roll out a set of VMs as a whole, as opposed to individual VMs. Managing all the VMs associated with a service as one entity, as a private cloud does, has its advantages, and at the same time introduces opportunities and challenges for better delivering business value. The service template is an example.
An exciting feature of VMM 2012 is the introduction of the service template, a set of definitions capturing all configuration settings for a single release of a service. As a new release of a service is introduced due to changes in the application, settings, or VM images, a new service template is developed as well. With a service template, a cloud administrator can deploy a service consisting of a set of VMs that are multi-tiered, possibly with multiple VM instances in individual tiers, based on the service configuration. For instance, instead of deploying individual VMs, using a service template in VMM 2012 IT can now deploy and manage a typical web-based application with web front ends, business logic in a middle tier, and a database back end as a single service.
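As a rough illustration of what a service template captures, consider the Python sketch below. The field names and the three-tier example are hypothetical, and a real service template holds far more (OS images, application packages, network settings) than shown; the point is that one artifact describes the whole multi-tier deployment:

```python
from dataclasses import dataclass, field

@dataclass
class Tier:
    name: str
    vm_template: str         # which VM template the tier's instances are built from
    instance_count: int = 1  # initial number of VM instances in the tier

@dataclass
class ServiceTemplate:
    service_name: str
    release: str             # one template per release of the service
    tiers: list = field(default_factory=list)

# A typical web application captured as one deployable unit:
webshop_v1 = ServiceTemplate(
    service_name="WebShop",
    release="1.0",
    tiers=[
        Tier("Web Front End", vm_template="IIS-WebServer", instance_count=2),
        Tier("Business Logic", vm_template="AppServer"),
        Tier("Database", vm_template="SQLServer"),
    ],
)
print([tier.name for tier in webshop_v1.tiers])
```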
Private Cloud It Is
VMM 2012 signifies a milestone for enterprise IT to actually have a solution to operate like a service provider. With VMM 2012 soon to be released, IT as a service is becoming a reality. And while some IT professionals are concerned that cloud may take away their jobs, I am hoping, on the contrary, that reading through this series one will realize the energy and excitement cloud computing has already brought into our IT industry and the careers it has broadened. I believe private cloud is the greatest thing yet to happen to IT. Anticipation and curiosity arise every time I start envisioning the many possibilities IT has with private cloud. It is inspiring to witness cloud computing coming true and to be part of it. And I can't help imagining an IT pro greasing up his hair, walking down the hallway of some datacenter, and shouting out….
I solve my problems and I see the light
We gotta plug and think, we gotta feed it right
There ain't no danger we can go too far
We start believing now that we can be what we are
Cloud is the word
It's got groove, it's got meaning
[To Part 2, 3, 4, 5]
As of August 2011, US NIST has published a draft document (SP 800-145) which defines cloud computing and outlines 4 deployment models: private, community, public, and hybrid clouds. At the same time, Chou has proposed a leaner version with private and public as the only two deployment models in his 5-3-2 Principle of Cloud Computing. The concept is illustrated below.
Regardless of how it is viewed, cloud computing characterizes IT's capability to abstract, manage, deliver, and consume a set of authorized resources as a service, i.e. with capacity on demand, without concern for the underlying infrastructure. Amid a rapid transformation from legacy infrastructure to a cloud computing environment, many IT professionals still struggle to understand what cloud is and how to approach it. IT decision makers need to be crisp on what private cloud and public cloud are before developing a roadmap for transitioning into cloud computing.
Private Cloud and Public Cloud
Private cloud is a "cloud" which is dedicated, hence private. As defined in NIST SP 800-145, private cloud has its infrastructure operated solely for an organization, while the infrastructure may be managed by the organization or a third party and may exist on premises or off premises. By and large, private cloud is a pressing and important topic, since a natural progression in datacenter evolution for post-virtualization-era enterprise IT is to convert and transform existing establishments, i.e. what has already been deployed, into a cloud-ready and cloud-enabled environment, as shown below.
NIST SP 800-145 points out that public cloud is a cloud infrastructure available to consumers/subscribers and owned by an organization selling cloud services to the public or to targeted audiences. Free public cloud services like Hotmail, Windows Live, SkyDrive, etc. and subscription-based offerings like Office 365, Microsoft Online Services, and the Windows Azure Platform are available on the Internet. And many simply refer to the Internet as the public cloud. This is, however, not entirely correct, since the Internet generally denotes connectivity and not necessarily a service with the 5 essential characteristics of cloud computing. In other words, just because something is accessible 24x7 through the Internet does not make it a cloud application. In such a case, "cloud computing" is nothing more than remote access.
Not Hybrid Cloud, But Hybrid Deployment
According to NIST SP 800-145, hybrid cloud is an infrastructure composed of two or more clouds. Here these two or more clouds are apparently related, or have some integrated or common components, to complete a service or form a collection of services presented to users as a whole. This definition is, however, vague. And the term, hybrid cloud, is extraneous and adds little value. A hybrid cloud of a corporation including two private clouds, from HR and IT respectively, both based on corporate AD for authentication, is in essence a private cloud of the corporation, since the cloud as a whole is operated solely for the corporation. If a hybrid cloud consists of two private clouds from different companies based on established trusts, this hybrid cloud will still be presented as a private cloud from either company due to the corporate boundaries. In other words, a hybrid cloud of multiple private clouds is in essence one logical private cloud. Similarly, a hybrid cloud of multiple public clouds is in essence a logical public cloud. Further, a hybrid cloud of a public cloud and a private cloud is either a public cloud when accessed from the public cloud side or a private cloud from the private cloud side. It is either "private" or "public." Adding "hybrid" only confuses people more.
Nevertheless, there are cases in which a cloud and its resources are deployed with various deployment models. I call these hybrid deployment scenarios, including:
I have previously talked briefly about some hybrid deployment scenarios. In upcoming blogs, I will walk through the architectural components and further discuss these scenarios.
Here are a few interesting observations I have when classifying cloud computing. First, the current implementation of cloud computing relies on virtualization, and a service is relevant only to those VM instances, i.e. the virtual infrastructure, where the service is running. Notice that the classification of private cloud or public cloud is not based on where a service runs or who owns the employed hardware. Instead, the classification is based on whom, i.e. which users, a cloud is operated and deployed for. In other words, deploying a cloud on a company's hardware does not automatically make it the company's private cloud. Similarly, a cloud hosted on hardware owned by a 3rd party is not a public cloud by default either.
Next, at various levels of private cloud, IT is a service provider and a consumer at the same time. In an enterprise setting, a business unit's IT may be a consumer of a private cloud provided by corporate IT, while also being a service provider to the users served by the business unit. For example, the IT of an application development department subscribes to a private cloud of IaaS from corporate IT based on a consumption-based chargeback model at a departmental level. This IT of an application development department can then act as a service provider to offer VMs, dynamically deployed with lifecycle management, to authorized developers within the department. Therefore, when examining private cloud, we should first identify roles, then set proper context based on the separation of responsibilities to clearly understand the objectives and scope of a solution.
Finally, community cloud as defined in NIST SP 800-145 is really just a private cloud of a community, since the cloud infrastructure is still operated for an organization which now consists of a community. This classification, in my view, appears academic and extraneous.
With the introduction of Windows Azure Connect, many options for an on-premises application to integrate with or migrate to cloud at an infrastructure level are available. The integration and migration opportunities will become apparent by examining how applications are architected for on-premises and cloud deployments. These concepts are profoundly important for IT pros to clearly identify, define, and apply while expanding their roles and responsibilities into those of a cloud or service architect. In Part 2, let's first review computing models before making cloud computing a much more exciting technical expedition with Windows Azure Connect.
Then Traditional 3-Tier Application Architecture
Based on a client-server model, the traditional n-tier application architecture carries out a business process in a distributed fashion. For instance, a typical 3-tier web application as shown below includes:
When deployed on premises, IT has physical access to the entire infrastructure and is responsible for all aspects of the lifecycle including configuration, deployment, security, management, and disposition of resources. This has been the deployment model upon which theories, methodologies, and practices have been developed and many IT shops have operated. IT controls all resources and at the same time is responsible for the end-to-end, distributed runtime environment of an application. Frequently, to manage an expected high volume of incoming requests, load balancers, which are expensive to acquire and expensive to maintain, are placed at the front end of an application. To improve data integrity, clusters, which are expensive to acquire and, yes, expensive to maintain, are configured at the back end. Not only do load balancers and clusters increase complexity and demand skill sets that are technically challenging and hard to acquire, but both fundamentally increase the capital expenses and operational costs throughout the lifecycle of a solution, and ultimately the TCO.
Now State-of-the-Art Windows Azure Computing Model
The Windows Azure Platform is Microsoft's Platform as a Service, i.e. PaaS, solution. And PaaS here means that an application developed with the Windows Azure Platform (which is hosted in Microsoft datacenters around the world) is by default delivered as Software as a Service, or SaaS. From a quick review of the 6-part Cloud Computing for IT Pros series, one will notice that I have already explained the computing concept of Windows Azure (essentially Microsoft's cloud OS) in Computing Model and Fabric Controller. In the Windows Azure computing model, a Web Role receives and processes incoming HTTP/HTTPS requests from a configured public endpoint, i.e. a web front end with an internet-facing URL specified while publishing an application to Windows Azure. A Web Role instance is deployed to a (Windows Server 2008 R2) virtual machine with IIS, and the Web Role instances of an application are automatically load-balanced by Windows Azure. A Worker Role, on the other hand, is like a Windows service or batch job which starts by itself; it is the equivalent of the middle tier where business logic and back-end connectivity reside in a traditional 3-tier design. A Worker Role instance is deployed to a virtual machine without IIS in place. The following schematic illustrates the conceptual model.
The VM Role is a definition allowing a virtual machine (i.e. a VHD file) to be uploaded and run with the Windows Azure Compute service. There are some interesting points about the VM Role. Based on the separation of responsibilities, in PaaS only the Data and Application layers are supposed to be managed by consumers/subscribers, while the Runtime layer and those below are controlled by the service provider, which in the case of the Windows Azure Platform is Microsoft. Nevertheless, the VM Role in fact makes not only the Data and Application layers but also the Runtime, Middleware, and OS layers accessible in a virtual machine controlled by a subscriber of the Windows Azure Platform, which is, by the way, a PaaS and not an IaaS offering. This is because the VM Role is designed to address specific issues, and above all IT pros need to recognize that it is intended as a last resort. Information on why and how to employ the VM Role is readily available elsewhere, and not repeated here.
So, with the Windows Azure Platform, the 3-tier design is in fact very much applicable. The Windows Azure design pattern employs a Web Role as a front end to process incoming requests as quickly as possible, and a Worker Role as a middle tier to do most of the heavy lifting, namely executing business logic against application data. The communication between a Web Role and a Worker Role is via the Windows Azure Queue and is detailed elsewhere.
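The pattern is easier to see in code. Below is a language-neutral sketch in Python, standing in for the actual .NET roles and the Windows Azure Queue (which it does not use): the front end only enqueues work and returns quickly, while the worker loop drains the queue:

```python
import queue
import threading
import time

work_queue = queue.Queue()   # stand-in for a Windows Azure Queue

def web_role_handle(request):
    """Front end: accept the request, queue it, and return quickly."""
    work_queue.put(request)
    return "202 Accepted"

def worker_role_loop():
    """Middle tier: poll the queue and do the heavy lifting."""
    while True:
        job = work_queue.get()
        process(job)             # business logic against application data
        work_queue.task_done()

def process(job):
    time.sleep(0.1)              # placeholder for real work

threading.Thread(target=worker_role_loop, daemon=True).start()
print(web_role_handle({"order": 42}))
work_queue.join()                # wait until the worker has drained the queue
```

The queue is what lets the two tiers scale independently: add Worker Role instances when the backlog grows, without touching the front end.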
With Visual Studio and the Windows Azure SDK, the process of developing a Windows Azure application is highly transparent relative to that of an on-premises application. And publishing a Visual Studio cloud project amounts to simply uploading two files to the Windows Azure Platform Management Portal. The two files are generated when publishing an intended cloud project in Visual Studio: a zipped package of application code and a configuration file, with cspkg and cscfg file extensions, respectively. The publishing process can be further hardened with certificates for higher security.
Compared with on-premises computing, there are noticeable constraints when deploying an application to cloud, including:
These constraints are related to enabling system management of resource pooling and elasticity, which are among the essential characteristics of cloud computing.
Two important features, high availability and fault tolerance, are automatically provided by Windows Azure, which can significantly reduce the TCO of an application deployed to cloud compared with that of an on-premises deployment. Details of how Windows Azure achieves automatic high availability and fault tolerance are not included here; a discussion of this topic is scheduled for an upcoming blog post. Stay tuned.
An Emerging Application Architecture
With Windows Azure Connect, integrating and extending a 3-tier on-premises deployment to cloud is now relatively easy. As part of Microsoft's PaaS offering, Windows Azure Connect automatically configures IPsec connectivity to securely connect Windows Azure role instances with on-premises resources, as indicated by the dotted lines in the following schematic. Notice that the role instances and on-premises computers to be connected are first grouped; all members in a group are exposed as a whole, and connectivity is established at the group level. With IPsec in place, a Windows Azure role instance can join and be part of an Active Directory in a private network. Namely, server and domain isolation with Windows Authentication and group policies can now be applied to cloud computing resources without significant changes to the underlying application architecture. In other words, the Windows security model and system management in a managed environment can now seamlessly include cloud resources, which essentially makes many IT practices and solutions directly applicable to cloud with minimal changes.
With the introduction of cloud computing, an emerging application architecture is a hybrid model with a combination of components deployed to cloud and on-premises. With Windows Azure Connect, cloud computing can simply be part of, and does not necessarily encompass, an entire application architecture. This allows IT to take advantage of what the Windows Azure Platform offers, like automatic load balancing and high availability, by migrating selected resources to cloud, as indicated by the dotted lines in the above schematic, while managing all resources of an application with a consistent security model and domain policies. Whether the front end of an application is in cloud or on premises, the middle tier and the back end can be a combination of cloud computing resources and on-premises deployment.
Start Now and Be What's Next
With Windows Azure Connect, cloud and on-premises resources are both within reach of each other. For IT pros, this reveals a strategic and urgent need to convert existing on-premises computing into a cloud-ready and cloud-friendly environment. This means, if not already done, starting to build hardware and software inventories, automating and optimizing existing procedures and operations, standardizing the authentication provider, implementing PKI, providing federated identity, etc. The technologies are all here already and solutions are readily available. For those feeling the Windows Azure Platform is foreign and remote, I highly recommend familiarizing yourselves with Windows Azure before everybody else does. Use the promotion code DPEA01 to get a free Azure Pass without credit card information. Take the first step in upgrading your skills with cloud computing and welcome the exciting opportunities presented to you.
Having the option to get the best of both cloud computing and on-premises deployment, and not being forced to choose one or the other, is a great feeling. It's like… dancing down the street with a cloud at your feet. And I say that's amore.
<Back to Part 1: Concept>
Personally, I see Windows Azure Connect as a killer app to facilitate the adoption of cloud computing. For all IT pros, this is where we take off and reach out to the sky while dancing down the street with a cloud at our feet. And that's amore.
What It Is
Simply put, Windows Azure Connect offers IPsec connectivity between Windows Azure role instances in the public cloud and computers and virtual machines deployed in a private network, as shown below.
Why It Matters
The IPsec connectivity provided by Windows Azure Connect enables enterprise IT to relatively easily establish trust between on-premises resources and Windows Azure role instances. A Windows Azure role instance can now join and be part of an Active Directory domain. In other words, a domain-joined role instance becomes part of a defense-in-depth strategy, included in domain isolation and subject to the same name resolution, authentication scheme, and domain policies as other domain members, as depicted in the following schematic.
In Windows Azure Platform AppFabric (AppFabric), there is also the so-called Service Bus, offering connectivity options for Windows Communication Foundation (WCF) and other service endpoints. Windows Azure Connect and the AppFabric Service Bus are both very exciting features and different approaches for distributed applications to establish connectivity with intended resources. In a simplistic view, Windows Azure Connect is set at a box level and is more relevant to sysadmin operations and settings, while Service Bus is a programmatic approach within a Windows Azure application, with more control over what to connect and how.
A Clear Cloudy Day Ahead
Ultimately, Windows Azure Connect offers a cloud application a secure integration point with private network resources. At the same time, for on-premises computing, Windows Azure Connect extends resources securely to the public cloud. Both introduce many opportunities and interesting scenarios in a hybrid model where cloud computing and on-premises deployment together form enterprise IT. The infrastructure significance and operational complexities at various levels of a hybrid model enabled by Windows Azure Connect bring excitement and many challenges to IT pros. What a great development in cloud computing. And I realize there's indeed a place in the world for a gambler, where skies are blue and dreams do come true.
<Next to Part 2: Application Integration/Migration Model>
When it comes to cloud security, many times I have heard people simply claim it is not secure, yet fail to give specifics. Consequently, all too often a cloud security discussion soon turns into a religious or linguistic debate, instead of focusing on what the concerns are and how to address them. Another interesting observation is that an assumption seems fundamentally in place that if it is not compliant, it is not secure, which is incorrect, as explained later. This blog examines a few important concepts and strategies to better understand how to approach cloud security in general.
Compliance vs. Security
In cloud computing, we must recognize that security and compliance are two topics and not necessarily consequential. There are some scenarios in which cloud computing is perhaps not able to become directly compliant due to an inability to provide all required security specifics. This, however, does not necessarily suggest cloud computing is not secure. For instance, a customer may demand affinity, or some predictability, between an application and the physical server the application is running upon. This is a fundamental disruption in cloud computing. Notice that one of the 5 characteristics of cloud computing is resource pooling, so that resources can be identified, allocated, monitored, managed, and de-allocated dynamically and on demand while providing high availability and location transparency of service instances, which is a necessary condition for offering elasticity (also one of the 5 characteristics) with current technologies. Resource pooling means that the server upon which a cloud application instance will run is decided by the availability of a targeted resource in an intended pool at allocation time. To specify on which servers an application can run would abolish the ability to sustain high availability and on-demand capacity of a running instance. By default, cloud computing cannot and should not offer affinity between hardware and a running instance. Does this mean cloud computing is not secure? The answer is "Huh?" since compliance and security are here two different matters.
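A toy placement function makes the conflict visible. In the Python sketch below (hypothetical names, not any provider's actual scheduler), the fabric picks any host with free capacity at allocation time; a pinned-to-one-server requirement would defeat exactly this mechanism:

```python
import random

def place_instance(hosts):
    """Resource pooling: place an instance on any host with free capacity;
    the consumer cannot pin the instance to a specific server."""
    candidates = [name for name, free in hosts.items() if free > 0]
    if not candidates:
        raise RuntimeError("no capacity left in the pool")
    chosen = random.choice(candidates)  # placement is the fabric's decision
    hosts[chosen] -= 1
    return chosen

hosts = {"host-a": 2, "host-b": 0, "host-c": 5}
print(place_instance(hosts))  # host-a or host-c; never a guaranteed server
```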
Context and Scenario
Cloud is a broad topic and adds a few layers of abstraction, so be specific about an examined topic. Reference the 5-3-2 principle and consider the separation of responsibilities to set the context and describe the scenario in which you believe security may be an issue.
One should first answer the above questions to make certain an issue is relevant specifically to cloud computing, and to determine whether a consumer or a service provider is responsible. If it is not specific to cloud computing, it should not be discussed as a cloud computing issue. The abstractions of cloud computing all too often confuse people and complicate an issue further. If one is able to discover what the issue is, how it can happen, who is responsible, and whether it is a cloud-specific issue, there is a great opportunity that a solution will present itself.
Notice that a key enabler of cloud computing is virtualization, and cloud security is conceptually not that much different from the security considerations for virtualization and on-premises computing in general. There are various layers in cloud computing, as highlighted in the schematic on the left, and defense in depth is directly applicable and a best practice. In on-premises computing, corporate IT has control over all layers. In cloud computing, depending on the delivery method and deployment model, there is a separation of responsibilities between a service provider and consumers, and the resources below a certain layer are owned and managed by the service provider. For instance, a service provider manages all layers in SaaS, so a user does not need to know where and how the system is maintained and managed, other than the URL of the subscribed service and an authorized account to use the service. Microsoft Office 365 and Online Services are SaaS offerings; both give customers enterprise email, collaboration, and unified communications capabilities without the need to own the IT infrastructure, which encompasses all the layers shown. This also means a subscriber has no control over any layer. Meanwhile, in PaaS a user has control of the Applications and Data layers, but not those below. Microsoft Windows Azure is a PaaS example: it provides an environment for development, deployment, and management, and enables IT to code, test, publish, and manage a cloud application delivered with SaaS in the public cloud. It is a very powerful, efficient, and strategic platform on which cloud applications can be developed, deployed, and managed highly transparently with on-premises establishments, with IPsec connectivity easily achieved once Windows Azure Connect is available. In IaaS, the layers above Virtualization are managed by a subscriber; namely, a customer now has the responsibility to harden and patch the OS as well as all applications and services running in a virtual machine deployed via IaaS. Microsoft's IaaS solutions focus much on private cloud. For many, the concept of IaaS remains a bit remote and foreign. The good news is that with the upcoming release of System Center 2012, building and deploying a private cloud will be a relatively straightforward and easy process. Expect a few of my upcoming blogs to examine key concepts of Windows Azure Connect and System Center 2012.
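The separation of responsibilities described above can be summarized in a few lines of Python. The layer names follow the common stack diagram referenced in the text; treat this as a mnemonic, not a formal specification:

```python
LAYERS = ["Data", "Applications", "Runtime", "Middleware", "OS",
          "Virtualization", "Servers", "Storage", "Networking"]

# Layers a consumer manages under each delivery method; the service
# provider manages everything else.
CONSUMER_MANAGED = {
    "SaaS": [],                                        # provider runs it all
    "PaaS": ["Data", "Applications"],                  # e.g. Windows Azure
    "IaaS": ["Data", "Applications", "Runtime", "Middleware", "OS"],
}

def provider_managed(model):
    return [layer for layer in LAYERS if layer not in CONSUMER_MANAGED[model]]

print(provider_managed("PaaS"))  # Runtime and below belong to the provider
```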
It’s About Trust
Either on premises or in cloud, at some point you just have to start trusting whoever is going to provide the service. If one thinks about it deeply enough, it should become apparent that trust is one of the root issues of cloud security. Will you trust someone to keep your data? No? Look around and think again. We all have in fact already been trusting many others to carry out our everyday business. We trust our Exchange admins, whoever they are, to run our email and inspect our inboxes with or without notice; internet service providers to route our messages and connect us with customers and partners; couriers to deliver our confidential packages among branch offices; etc. Hosting applications and data is certainly serious and critical to business. However, not all data are confidential and must be kept in a vault guarded only by employees. What needs to happen first is to examine the data relevant to the business and identify those which absolutely cannot be off premises. Then assess if it makes sense to go to cloud with the data that basically can be outsourced.
From a cloud computing consumer's point of view, in addition to establishing best practices for those resources within one's control, the ultimate questions are the trustworthiness of a service provider and whether a consumer can trust someone else to host one's data, applications, and infrastructure, as applicable. This question is rudimentary and a key concept in employing IT as a service.
This blog post lists terms frequently referenced in the Windows Azure Platform. They are presented in a hierarchical order based on the context shown in the following schematic. Each term is described concisely with its key concept and pertinent information. The content is intended for IT pros and non-programmers.
Windows Azure Platform: A collective name for Microsoft's Platform as a Service (PaaS) offering, which provides a programming platform, a deployment vehicle, and a runtime environment for cloud computing hosted in Microsoft datacenters
Windows Azure: Essentially Microsoft's cloud OS, which provides abstractions and shields the complexities of implementing and managing collections of hardware, software, and instances
Compute: A Windows Azure service for executing application code based on a specified role, including Web Role, Worker Role, and VM Role
Web Role: A service definition to deploy a VM with IIS 7 for hosting a web application
Worker Role: A service definition to deploy a VM without IIS for running application code in the background, similar to Windows services, batch jobs, or scheduled tasks
VM Role: A service definition to upload a VM to cloud (i.e. the Windows Azure Platform) for deploying an application with a custom or predictable runtime environment, provided as a last resort for addressing issues including:
Storage: A Windows Azure service for allocating persistent and durable storage accessible with HTTP/HTTPS (REST) and .NET
BLOB: Binary Large Object, for storing large data items like text and binary data
Table: Structured storage in the form of tables which store data as collections of entities, for maintaining service state
Drive: A page BLOB formatted as a single-volume NTFS virtual hard drive, to be mounted within a Windows Azure role instance and accessed like a local drive
Local Storage: Non-persistent storage local to a role instance
Fabric Controller: Owner of the datacenter, including hardware, software, and instances, and ultimately the brain of the cloud OS
Host Agent: A self-initialized application deployed with the root partition of a Windows Azure Compute node to form the fabric
Guest Agent: A self-initialized application deployed with the base image of a Guest OS to form the fabric
Windows Azure Connect: A user interface to configure IPsec-protected connections between computers or virtual machines (VMs) in an organization's network and roles running in Windows Azure
Windows Azure CDN: An add-on feature of a Windows Azure subscription to cache Windows Azure BLOBs and the static content output of Compute instances at Microsoft's caching servers near where the content is most frequently accessed
SQL Azure: A cloud-based relational database service, with SQL Azure Reporting as a report-generating service
Service Bus: Provides secure messaging and connectivity capabilities through firewalls, NAT gateways, and other problematic network boundaries, enabling distributed and disconnected applications in the cloud, as well as hybrid applications across both on-premises and cloud
Access Control: A hosted service providing federated authentication and rules-driven, claims-based authorization for REST web services, with integration with Windows Identity Foundation (WIF) and identity providers like Active Directory Federation Services (ADFS) v2
Caching: A subset of the on-premises distributed caching solution, Windows Server AppFabric Caching, for provisioning a cache in cloud to be used with ASP.NET or client applications for caching requirements
Integration: Capabilities similar to those of BizTalk, to integrate Windows Azure Platform applications with existing LOB applications and databases and with third-party Software as a Service (SaaS) applications
Composite App: For building applications as a composite of services in the cloud and on premises, components, web services, workflows, and existing applications
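Since BLOB storage is reachable over plain HTTP/HTTPS, a few lines of Python are enough to read one. This sketch assumes a container configured for public read access; the account, container, and blob names are made up, and authenticated access would additionally require a signed Authorization header, omitted here:

```python
from urllib.request import urlopen

account, container, blob = "myaccount", "images", "logo.png"
url = f"https://{account}.blob.core.windows.net/{container}/{blob}"

with urlopen(url) as resp:   # a plain GET suffices for public blobs
    data = resp.read()
print(len(data), "bytes downloaded")
```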
The first order of business in understanding cloud computing is to know what the term, service, means, since it has been used extensively to explain cloud technologies. Service in the context of cloud computing means "capacity on demand" or simply "on demand." Notice that on-demand here also implies real-time response and ultimately anytime, anywhere, and any-device accessibility. The idea is straightforward. Basically, as a service bell is rung, the requested resources are magically made available. So, IT as a Service means IT on demand. And now it should be apparent what news as a service, catering as a service, or simply my business as a service means. And we can clearly explain the three cloud computing delivery methods. SaaS means software on demand: an application can simply be readily available for an (authorized) user. PaaS offers a programming environment (or platform) enabling the development and delivery of SaaS. And IaaS empowers a user with the ability to provision infrastructure, i.e. deploy servers with virtual machines, on demand. Further, at an implementation level with current technologies, cloud computing also dictates that virtualization (namely an abstraction of the underlying complexities of topology, networking, monitoring, management, etc. from the provided services) is put in place, such that a user can consume or acquire SaaS, PaaS, and IaaS without the need to own and deploy the required hardware, reconfigure cabling, and so on.
The term, cloud, regardless of public, private, or anything in between, means the 5-3-2 principle of cloud computing (see above) is applicable. The 5 characteristics listed in the 5-3-2 principle are the criteria to differentiate a cloud from a non-cloud application, and they also concisely outline the benefits of cloud computing. This recognition is the essence of cloud computing. And in my view, much of the confusion in cloud computing discussions has been due to a lack of understanding of the 5-3-2 principle. For instance, many are confused about, and mistakenly consider, remote access or anything via the Internet as cloud computing. This assumption is incorrect and inconclusive. My rule of thumb is that applications exhibiting the 5 characteristics are cloud applications and those that don't are not. Above all, the 5-3-2 principle, or more specifically the NIST Definition of Cloud Computing, scopes the subject domain of cloud computing with current technologies and presents a definition that is structured, disciplined, and clear.
So public cloud is a cloud, and the 5-3-2 principle applies. The term, public, in the context of cloud computing refers to the Internet, general availability, and subscription when applicable. Windows Live and Hotmail, for example, are Microsoft SaaS offerings in the public cloud for consumers, while Office 365, Microsoft Online Services, and Microsoft Dynamics CRM Online are for businesses. They are all cloud applications because:
The 5 characteristics exhibited in the above-mentioned cloud applications are vivid and without ambiguity.
At the same time, private cloud is also a cloud, and dedicated, hence private, to an organization. As explained in Highly Virtualized Computing vs. Private Cloud, ubiquitous access and a pay-as-you-go model may not be essential in private cloud. Still, the applicability of the 5-3-2 principle to all clouds, including private cloud, should be very clear here. So, for example, the reasoning to answer the following question is actually straightforward.
First, with the 5-3-2 principle, we can easily determine if an application is a cloud application. Then the strategy is to discover which of the 5 characteristics are missing and how relevant they are to the business requirements. For instance:
And it is certainly up to an organization to decide how critical the 5 characteristics are and whether all or only selected ones are applicable to a targeted delivery. The lesson here is not necessarily an academic debate over whether a particular feature like self-service should be a requirement of private cloud. The crucial element is to have a predictable way (namely the 5 characteristics) to identify what is relevant to the business requirements.
One interesting observation of cloud computing is that many seem to have some understanding, yet few have the complete picture, since this is a subject touching very much every aspect of IT. Many can highlight some points of cloud computing, yet few can explain it with a structured and disciplined approach, since cloud computing is a very complex proposition on both the business and technical sides. I believe a productive way to discuss cloud computing is to focus on the fundamentals and have a clear understanding of what cloud is about and why, before framing it within a particular business or implementation. Employ the 5-3-2 principle to organize the message and describe cloud computing in your own words. You will find that once you grasp the concept, you can navigate through a cloud computing conversation with clarity, substance, and productivity.
If cloud computing is not confusing enough, there is also this thing called private cloud. And what is private cloud? I am hoping that by this time you have reviewed my Cloud Computing for IT Pros series and have a clear understanding of what a service is and what cloud computing is. These are key concepts. And equally important, you know the 5-3-2 Principle of Cloud Computing and why one application is a cloud application while another may not be. Generally speaking, there are 5 essential characteristics, 3 delivery methods, and 2 deployment models (or 4 if following the NIST definition) in cloud computing. It does not matter whether it is public cloud or private cloud: if it is classified as cloud computing, it should at least exhibit the 5-3-2 principle as the core set of attributes. With that in mind, what is private cloud?
Private cloud? Well, it is a cloud, so the 5 essential characteristics of cloud computing apply. The term, private, here means dedicated, and a private cloud is a cloud dedicated to an organization. The classification is based on the intended users and not the ownership of the infrastructure. Namely, that an organization has a dedicated cloud does not necessarily mean the organization must own the infrastructure on which the dedicated cloud is running. An obvious example is a private cloud running on an infrastructure owned and managed by a 3rd-party hosting company. The subscribing company may well own the data, software, configurations, and instances, but not the physical boxes and underlying infrastructure. To find out more about running a private cloud in this fashion, a list of private cloud hosting companies is readily available.
Perhaps a more commonly assumed definition of private cloud is an on-premises deployment of cloud computing. In other words, everything including the servers, cabling, software, running instances, etc. is owned and managed by an organization behind its enterprise firewall, as shown above. Many enterprises assume this definition of private cloud due to an existing deployment of on-premises IT resources. While transitioning into private cloud, it is a logical step to build one by employing already deployed hardware and software.
Ultimately, cloud computing is about better delivering applications. The goal of constructing a private cloud can be acquiring IaaS, PaaS, or SaaS. Based on the objectives, an organization, for example, may simply seek the ability to efficiently deploy and manage servers to provide maximal flexibility for developing and testing applications, and in this case IaaS is what and all the organization needs. While the servers are deployed via IaaS, the applications running within these servers do not have to be cloud applications; they can very well be traditional (i.e. non-cloud) ones. The point is that to pursue a private cloud, it is not necessary to acquire all three (IaaS, PaaS, and SaaS) delivery methods. Nevertheless, for an enterprise it is only logical to start with IaaS to fundamentally and strategically convert existing IT establishments into a cloud-ready environment. For pursuing a private cloud, IT should have IaaS in place first, which fundamentally provides the mechanism for resource pooling, scalability, and elasticity.
Microsoft's private cloud solution is called Hyper-V Cloud. It is a set of guidelines, as shown here on the right, and offerings for building a private cloud with IaaS using readily available technologies, i.e. Windows Server 2008 R2 and System Center Virtual Machine Manager. Hyper-V Cloud is exciting since not only does it increase the ROI on existing deployments, it also strategically places a foundation to integrate with the Windows Azure platform offered in the public cloud. Ultimately, enterprises will be able to manage physical, virtualized, and cloud (private and public) resources with a single pane of glass provided by System Center.
Above all, it does not matter if the delivery method is IaaS, PaaS, or SaaS. As far as a user is concerned, whatever your service/application is, it is always SaaS, even if your application is not cloud-based. The application is what this is all about. So when it comes to implementing a private cloud, which will eventually change how your IT delivers services, it is an expensive proposition in terms of both cost and customer satisfaction. Be clear on short-term checkpoints and long-term business goals. Scope down, but be very strategic in the overall implementation.
Virtualization vs. private cloud has confused many IT pros. Are they the same? Or different? In what way, and how? We have already virtualized most of our computing resources; is a private cloud still relevant to us? These are questions I have been asked frequently. Before getting to the answers, in the first article of the two-part series listed below I want to set a baseline.
Lately, many IT shops have introduced virtualization into their existing computing environments. Consolidating servers, mimicking production environments, virtualizing test networks, securing resources with honey pots, adding disaster recovery options, etc. are just a few applications of virtualization. Some also run highly virtualized IT with automation provided by system management solutions. I imagine many IT pros recognize the benefits of virtualization, including better utilization of servers, the associated savings from reducing the physical footprint, etc. Now that we are moving into a cloud era, the question becomes "Is virtualization the same as a private cloud?" or "We are already running highly virtualized computing today; do we still need a private cloud?" The answers to these questions should always start with "What business problems are you trying to address?" Then assess if a private cloud solution can fundamentally solve the problem, or whether virtualization is sufficient. This of course assumes there is a clear understanding of what virtualization is and what a private cloud is. The point is that virtualization and cloud computing are not the same. They address IT challenges in different dimensions, operate in different scopes, and have different levels of impact on a business.
To make a long story short, virtualization in the context of IT is to "isolate" computing resources such that an object (i.e. an application, a task, a component) in a layer above can be operated without concern for changes made in the layers below. A lengthy discussion of virtualization is beyond the scope of this article. Nonetheless, let me point out that the terms, virtualization and "isolation," are chosen for specific reasons, since there are technical discrepancies between "virtualization" and "emulation," and between "isolation" and "redirection." Virtualization isolates computing resources, hence offering an opportunity to relocate and consolidate isolated resources for better utilization and higher efficiency. Virtualization is rooted in infrastructure management, operations, and deployment flexibility. It's about consolidating servers, moving workloads, streaming desktops, and so on, which without virtualization are not technically feasible or may simply be cost-prohibitive.
Cloud computing, on the other hand, is a state, a concept, a set of capabilities. There are statements on what to expect in general from cloud computing. The definition of cloud computing published in NIST SP 800-145 outlines the essential characteristics, the delivery methods, and the deployment models required to be cloud-qualified. Chou further simplifies it and offers a plain and simple way to describe cloud computing with the 5-3-2 Principle, as illustrated below.
Realizing the fundamental differences between virtualization and private cloud is therefore rather straightforward. In essence, virtualization is not based on the 5-3-2 Principle, as opposed to cloud computing, which is. For instance, a self-service model is not an essential component of virtualization, while it is essential in cloud computing. One can certainly argue that some virtualization solutions may include a self-service component. The point is that self-service is neither a necessary nor a sufficient condition for virtualization. In cloud computing, by contrast, self-service is a crucial concept for delivering anytime availability to users, which is what a service is all about. Furthermore, self-service is an effective mechanism to reduce training and support at all levels in the long run. It is a crucial vehicle to accelerate the ROI of a cloud computing solution and make it sustainable.
So what specifically distinguishes a highly virtualized computing environment from a private cloud?
For discussing cloud computing, I recommend employing the following theories as a baseline.
Theory 1: You cannot productively discuss cloud computing without first clearly defining what it is.
The fact is that cloud computing is confusing since everyone seems to have a different definition of it. Notice the issue is not a lack of definitions, nor the need for an agreed definition. The issue is not having a well-thought-out definition to operate upon. And without a good definition, a conversation on cloud computing all too often becomes non-productive, since cloud computing touches infrastructure, architecture, development, deployment, operations, automation, optimization, manageability, cost, and every aspect of IT. And as explained below, it is indeed a generational shift of our computing platform from desktop to cloud. Without a good baseline, a conversation on the subject results in nothing more than an academic exercise, in my experience.
Theory 2: The 5-3-2 principle defines the essence and scopes the subject domain of cloud computing.
Employ the 5-3-2 principle as a message framework to facilitate discussions and improve awareness of cloud computing. The message of cloud computing itself is, however, up to individuals to articulate. Staying with this framework will keep a cloud conversation aligned with the business values which IT is expected to, and should, deliver in a cloud solution.
Theory 3: The 5-3-2 principle of cloud computing describes the 5 essential characteristics, 3 delivery methods, and 2 deployment models of cloud computing.
The 5 characteristics of cloud computing, shown below, are the expected attributes for an application to be classified as a cloud application. These are the differentiators. Questions like "I am running X, do I still need cloud?" can be clearly answered by determining if these characteristics are expected of X.
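One way to make that determination concrete is a simple checklist. The sketch below, with characteristic names taken from the NIST definition cited earlier, reports which of the 5 are missing for an application X:

```python
ESSENTIAL = {"on-demand self-service", "broad network access",
             "resource pooling", "rapid elasticity", "measured service"}

def missing_characteristics(exhibited):
    """Return the essential characteristics an application X lacks."""
    return sorted(ESSENTIAL - set(exhibited))

# For instance, a VM farm provisioned manually by operators:
print(missing_characteristics({"broad network access", "resource pooling"}))
```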
The 3 delivery methods of cloud computing, as shown below, are the frequently heard Software as a Service, Platform as a Service, and Infrastructure as a Service, namely SaaS, PaaS, and IaaS respectively. Here, the key is to first understand what a service is. All 3 delivery methods are presented as services in the context of cloud computing. Without a clear understanding of what a service is, there is a danger of not grasping the fundamentals and thereby misunderstanding all the rest.
The 2 deployment models of cloud computing are public cloud and private cloud. Public cloud is intended for public consumption, and private cloud is a cloud (and notice a cloud should exhibit the 5 characteristics) whose infrastructure is dedicated to an organization. Private cloud, although frequently assumed to be inside a private datacenter, as depicted below, can be on premises or hosted off premises by a 3rd party. Hybrid deployment is an extended concept of private cloud with resources deployed on-premises and off-premises.
The 5-3-2 principle is a simple, structured, and disciplined way of conversing about cloud computing. The 5 characteristics, 3 delivery methods, and 2 deployment models together explain the key aspects of cloud computing. A cloud discussion is to validate the business need for the 5 characteristics, the feasibility of delivering an intended service with SaaS, PaaS, or IaaS, and whether public cloud or private cloud is the preferred deployment model. Under the framework provided by the 5-3-2 principle, there is now a structured way to navigate through the maze of cloud computing and a direction toward an ultimate cloud solution. Cloud computing will be clear and easy to understand with the 5-3-2 principle, as follows: