Compared with on-premises computing, deployment to the cloud is much easier; a few clicks will do. And I believe this is probably why many have jumped to the conclusion that we IT professionals are going to lose our jobs in the cloud era. There is some truth to it, not much in my view, but some. However, not so fast, I say. Bringing the cloud into the picture does dramatically reduce cost and complexity in some areas, yet at the same time the cloud introduces many new nuances which demand that IT disciplines develop criteria with business insight. So are we going to lose our jobs? No, I do not think so. IT professionals are not going to lose their jobs; rather, they will affect the bottom line more directly than ever, since with cloud computing everything comes with a dollar sign attached and every IT operation has a direct cost implication. In this article, I discuss some routines and additional considerations for deploying to the cloud. Note that throughout this article, in the context of cloud computing, I use the terms application and service interchangeably.
For IT professionals, deploying a Windows Azure service to the cloud starts when development hands over the gold bits and the configuration file, as depicted in the schematic below.
In Visual Studio with the Windows Azure SDK, a developer can create a cloud project, build a solution, and publish the solution with an option to generate a package which zips up the code, accompanied by a configuration file defining the name and the number of instances of each compute role. This package and the configuration file are what get uploaded to the Windows Azure platform, either through the UI of the Windows Azure Developer Portal or programmatically with the Windows Azure API. For those not familiar with the steps for developing and uploading a cloud application to the Windows Azure platform, I highly recommend investing the time to walk through the labs, which are well documented and readily available.
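For illustration, a minimal service configuration file might look like the following. This is a hedged sketch: the service name, role name, instance count, and setting are placeholders, assuming a single web role scaled out to two instances.

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Hypothetical ServiceConfiguration.cscfg: serviceName, role name,
     and settings below are placeholders for illustration only. -->
<ServiceConfiguration serviceName="MyCloudService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <!-- The number of running instances of this compute role -->
    <Instances count="2" />
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString"
               value="UseDevelopmentStorage=true" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>
```

Notice that scaling the role out or in is a matter of changing the Instances count and uploading the new configuration, with no change to the package itself.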
Application Lifecycle Management
In a typical enterprise deployment process, developers code and test applications locally or in an isolated environment, then go through the build process to promote the code to an integrated/QA test environment. Upon finishing tests and QA, the code is promoted to production. For cloud deployment, the idea is much the same, except that at some point the code is placed in the cloud and tested there before going live on the Internet. Below is a sample application lifecycle where both the production and the main test environments are in the cloud.
Traditionally, when developing and deploying applications on premises, one challenge (and it can be a big one) is to keep the test environment mimicking production as closely as possible, so as to validate the developed code, data, operations, and procedures. Sometimes this is a major undertaking due to ownership, corporate politics, financial constraints, technical limitations, discrepancies between the test environment and production, and so on. And stories of applications that behave as expected in the test environment, yet generate numerous security and storage violations, throw fatal errors, and crash hard once in production, are heard many times. This is different when testing Windows Azure services. It turns out the experience of promoting code from staging to production in the cloud can be pleasant, and something to look forward to.
When code is first promoted into the Windows Azure platform from a local development/test environment, the application is placed in the so-called staging environment; when ready, an administrator can then promote the application into production. An interesting fact is that the staging and production environments in the Windows Azure platform are identical. There is no difference: the Windows Azure platform is the Windows Azure platform. What differentiates a staging environment from a production environment is the URL. The former is assigned a unique alphanumeric string as part of a non-published staging URL like http://c2ek9aa346384629a3401e8119de3500.cloudapp.net/, while the latter is an administrator-specified, user-friendly URL like http://yc.cloudapp.net. So to promote from staging to production in the Windows Azure platform, simply swap the virtual IPs, exchanging the staging URL with the production one. This is called a VIP swap, and two mouse clicks are all it takes to move a Windows Azure service from staging to production. See the screen capture below. And minutes later, once the Fabric Controller syncs all the URL references of the application, in production it is.
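Besides the portal, the same VIP swap can be invoked programmatically through the Service Management REST API. Below is a minimal C# sketch under stated assumptions: a management certificate has already been uploaded to the subscription, and the subscription ID, certificate thumbprint, service name, and deployment names are all placeholders to substitute with your own values.

```csharp
using System;
using System.IO;
using System.Net;
using System.Security.Cryptography.X509Certificates;
using System.Text;

class VipSwap
{
    static void Main()
    {
        // Placeholders: substitute your own subscription ID, hosted
        // service name, deployment names, and certificate thumbprint.
        string subscriptionId = "<subscription-id>";
        string serviceName = "yc";
        string uri = string.Format(
            "https://management.core.windows.net/{0}/services/hostedservices/{1}",
            subscriptionId, serviceName);

        var request = (HttpWebRequest)WebRequest.Create(uri);
        request.Method = "POST";
        request.ContentType = "application/xml";
        request.Headers.Add("x-ms-version", "2009-10-01");

        // Authenticate with the management certificate, assumed to be in
        // the CurrentUser\My store and uploaded to the subscription.
        var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        store.Open(OpenFlags.ReadOnly);
        request.ClientCertificates.Add(
            store.Certificates.Find(X509FindType.FindByThumbprint,
                                    "<certificate-thumbprint>", false)[0]);

        // The Swap Deployment body names the current production deployment
        // and the staging deployment to be promoted.
        string body =
            "<Swap xmlns=\"http://schemas.microsoft.com/windowsazure\">" +
            "<Production>currentProductionDeployment</Production>" +
            "<SourceDeployment>stagingDeployment</SourceDeployment></Swap>";
        byte[] bytes = Encoding.UTF8.GetBytes(body);
        using (Stream s = request.GetRequestStream())
            s.Write(bytes, 0, bytes.Length);

        using (var response = (HttpWebResponse)request.GetResponse())
            Console.WriteLine("Swap request accepted: {0}", response.StatusCode);
    }
}
```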
A VIP swap is handy both for an initial deployment and for subsequently updating a service with a new package: the changed service is first placed in the staging environment, and a VIP swap then promotes the code into production. This feature, however, does not apply to changes of the service definition. In such scenarios, redeploying the service package becomes necessary.
With the Windows Azure platform in the cloud, there are new opportunities to extend as well as integrate on-premises computing with cloud services. An application architecture can accept HTTP/HTTPS requests either with a Web Role or with an on-premises front end, and the requests can be processed, and the data stored, in the cloud, on premises, or in a combination of the two, as shown below.
So whether the service starts in the cloud or on premises, an application architect now has many options to improve the agility of an application architecturally. An HTTP request can start with on-premises computing while integrating with a Worker Role in the cloud for capacity on demand; at the same time, a cloud service can just as well employ a middle tier and a back end deployed on premises for security and control.
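As a sketch of this hybrid pattern, an on-premises front end could drop requests into a Windows Azure queue for a Worker Role to drain. The queue name and the connection-string setting below are assumptions for illustration, not prescribed names.

```csharp
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // "DataConnectionString" and the "requests" queue are illustrative
        // names; the on-premises front end enqueues work into this queue.
        var account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
        var queue = account.CreateCloudQueueClient().GetQueueReference("requests");
        queue.CreateIfNotExist();

        while (true)
        {
            CloudQueueMessage msg = queue.GetMessage();
            if (msg != null)
            {
                // Process the request here, then remove it from the queue.
                queue.DeleteMessage(msg);
            }
            else
            {
                Thread.Sleep(1000); // back off when the queue is empty
            }
        }
    }
}
```

The design point is that capacity on demand comes from adding Worker Role instances that all drain the same queue, while the front end, on premises or in the cloud, never needs to know how many workers are running.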
Standardizing Core Components
Whether on premises or in the cloud, core components, including the application infrastructure, development tools, common management platform, identity mechanism, and virtualization technology, should be standardized sooner rather than later. With a common set of technologies, see below, both IT professionals and developers can apply their skills and experience to either computing platform. In the long run, this produces higher quality services at lower cost, and it is crucial to making the transformation into the cloud a convergent process.
By default, diagnostic data in Windows Azure is held in a memory buffer and is not persisted. To access log data in Windows Azure, the log data first needs to be transferred to persistent storage. One can do this manually using the Windows Azure Developer Portal, or add code to the application to transfer the log data to storage at scheduled intervals. Then one needs a way to view the log data in Windows Azure storage, using tools like Azure Storage Explorer.
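For instance, a role can schedule the transfer when it starts. The following is a minimal sketch using the SDK's diagnostics API; the connection-string setting name, the five-minute interval, and the warning-level filter are assumptions for illustration.

```csharp
using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Start from the default configuration, then schedule the in-memory
        // trace log buffer to be transferred to Windows Azure storage.
        DiagnosticMonitorConfiguration config =
            DiagnosticMonitor.GetDefaultInitialConfiguration();
        config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);
        config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Warning;

        // "DiagnosticsConnectionString" names the storage account that
        // persists the logs for offline viewing and analysis.
        DiagnosticMonitor.Start("DiagnosticsConnectionString", config);
        return base.OnStart();
    }
}
```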
Traditionally, when running applications on premises, diagnostic data is relatively easy to get, since it sits in local storage and can be accessed and moved easily. Logs and events generated by applications are subscribed to, acquired, and stored as long as needed. When moving an application to the cloud, access to diagnostic data is no longer status quo. Because persisting diagnostic data in Windows Azure storage costs money, IT will need to plan how long to keep diagnostic data in Windows Azure and how to download it for offline analysis.
The cost of processing diagnostic data is one item in the overall cost model for moving an application to the public cloud. An application cost analysis can start with the big buckets, including network bandwidth, storage, transactions, compute, and so on, and eventually list out all the cost items, each with the organization responsible for that cost. Compare the ROI with that of an on-premises deployment to justify whether moving to the cloud makes sense. Once agreement is reached, each cost item or bucket should have a designated accounting code for tracking the cost. IT professionals will have much to do with this cost analysis, since the operational costs of running applications in the cloud are compared with both the capital expense and the operational costs of an on-premises deployment. For example, routine on-premises operations like backup and restore, monitoring, reporting, and troubleshooting must be revised and integrated with cloud storage management, which has both operational and cost implications.
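As a back-of-the-envelope illustration, a first-cut monthly estimate can simply multiply the big buckets by unit rates. The rates below are hypothetical placeholders, not published pricing; substitute the current rates for your subscription.

```csharp
using System;

class CloudCostModel
{
    // Hypothetical unit rates for illustration only.
    const double ComputeRatePerInstanceHour = 0.12; // per instance-hour
    const double StorageRatePerGBMonth      = 0.15; // per GB per month
    const double TransactionRatePer10K      = 0.01; // per 10,000 transactions
    const double BandwidthRatePerGBOut      = 0.15; // per GB of egress

    static double MonthlyEstimate(int instances, double storageGB,
                                  double transactions10K, double egressGB)
    {
        const double hoursPerMonth = 730; // average hours in a month
        return instances * hoursPerMonth * ComputeRatePerInstanceHour
             + storageGB * StorageRatePerGBMonth
             + transactions10K * TransactionRatePer10K
             + egressGB * BandwidthRatePerGBOut;
    }

    static void Main()
    {
        // Example: 2 instances, 50 GB stored, 1M transactions, 100 GB egress
        Console.WriteLine("Estimated monthly cost: {0:C}",
                          MonthlyEstimate(2, 50, 100, 100));
    }
}
```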
The takeaway is “Don’t go to bed without developing a cost model.”
Some Thoughts on Security
Cloud is not secure? I happen to believe that in most scenarios the cloud is actually very secure, more secure than running a datacenter on premises. Cloud security is a topic in itself and certainly far beyond the scope of the discussion here. Nevertheless, you will be surprised how fast a cloud security question can be raised and answered, many times without actually answering it. In my experience, many answers to cloud security questions surface by themselves once the context of the question is correctly set. Keep the concept of separation of responsibilities vivid whenever you contemplate cloud security; it brings much clarity. With SaaS, there is very little a subscriber can do, and security at the network, system, and application layers is largely predetermined. PaaS like the Windows Azure platform, on the other hand, presents many opportunities to place security measures in multiple layers with a defense-in-depth strategy, as shown below.
Recently, some Microsoft datacenters hosting cloud services for federal agencies achieved FISMA certification and received an Authorization to Operate (ATO). This is very much proof that cloud operations can meet and exceed rigorous standards. The story of cloud computing will only get better from here.
Cloud is not secure? Think again.
Power of Choices
For the enterprise, cloud computing presents tremendous opportunities to re-architect IT for the future. From infrastructure to business model and management, as depicted in the following, IT now has options to redistribute resources, reorganize, and reset priorities in ways perhaps not feasible in a traditional on-premises-only computing model. IT can shift cost from capital expense to operational expense, let a third party run functions that are not core business, employ the cloud to scale up or out easily, and so on. And the benefits of having a dedicated cloud are apparent and attractive. That is the so-called private cloud: a dedicated cloud which can be run on premises or hosted off premises by a third party.
Essence of Cloud
How to deploy and where to deploy a service do require strategic planning and solid fundamentals in application architecture, so that the benefits of cloud computing can be realized in a timely and predictable fashion. There are specific capabilities that cloud computing is expected to provide. Essentially, regardless of how cloud computing is deployed or delivered, applications need to engineer in the quality and exhibit the five characteristics of cloud computing, namely:

- on-demand self-service
- ubiquitous network access
- location-independent resource pooling
- rapid elasticity
- pay per use
Otherwise, one can call it whatever one wants; it is, however, not the cloud computing that the IT industry is talking about.
Approaching Private Cloud
Depending on the priorities of a business, the above five characteristics may not all be relevant for building a private cloud. For instance, perhaps ubiquitous network access is not required, since in a private cloud setting IT may not want resources to be that accessible for security reasons. A corporation may not critically need a pay-per-use model, due to the complexity and feasibility of implementing chargeback. To approach private cloud, there needs to be an overall vision defining the goals for the next three to five years. Private cloud is expensive, and mechanisms to ensure predictable results must be put in place.

Enterprise IT needs to enforce hardware and software standards in the datacenter and automate and optimize operations and procedures where possible. Application architects need to develop cloud application design guidelines to ensure application manageability and readiness for running in a mixed (i.e., cloud and on-premises) environment. On top of the applications, a common platform to manage physical, virtualized, and cloud resources transparently and in a consistent fashion is strategic and imperative for transitioning into cloud computing. Without a unified way to manage not only the virtual machines but also the workloads, and without a single pane of glass to manage applications in the private cloud as well as the public cloud, the cloud transformation will introduce many manageability issues into IT operations, and the process is likely to become divergent, with runaway costs, and eventually unmanageable.
There are established strategies for transforming into cloud computing. The above shows a logical progression from an on-premises establishment towards off-premises cloud computing. Along the way, virtualization is taking place and will continue to gain momentum (Gartner Symposium/ITxpo 2010), because with current technology, virtualization is essential for cloud computing to become a reality. Virtualization also implies that a management solution needs to be in place. For a cloud to scale rapidly with pooled resources and location transparency, virtual machines, and the ability to manage the physical hardware, the virtual machines, and the workloads running within them, are essential. The management solution needs to know which virtual machines need attention, how to operate them, where to place them, and what to do with them. Furthermore, the ability to monitor and manage the applications and services running within a workload is just as critical, since it is the applications and services that we care about, not the virtual machines themselves. Once virtualization is introduced and matures, enterprise IT can then migrate to an on-premises private cloud before moving the datacenter into the public cloud, if that is the goal. Depending on business needs, I imagine many IT shops will settle somewhere among virtualized infrastructure, private cloud, and public cloud. I believe to cloud or not to cloud, that is not the question. The question is really how much and how far.
Specific to private cloud, the above identifies the key milestones, essential capabilities, and recommendations. Each is crucial for contributing to a predictable ROI with long-run cost-effectiveness. Enterprise IT must rethink how it conducts business, and develop a vision with a roadmap to move from structured responses to on-demand deliveries. And private cloud is the very opportunity to do so.
Do-It-Yourself Private Cloud
The private cloud solutions from Microsoft are collectively called Hyper-V Cloud, which is a set of guidelines and offerings, not a packaged product. As of February 2011, the Hyper-V Cloud offerings are mainly for building private clouds of IaaS. Fast Track is delivered by partners who have worked with Microsoft to combine hardware and software offerings based on a reference architecture for building private clouds. There is also a Service Provider Program to identify a qualified service provider to host a dedicated private cloud for you. Above all, there are the Hyper-V Cloud deployment guides for you to build your own private cloud. Yes, you can build a private cloud yourself. In fact, regardless of which option you choose to acquire your private cloud, I highly recommend going through the deployment guides and building a private cloud in a lab environment, to gain a much better understanding of how to architect, implement, and manage a cloud computing solution.
Getting Ready for the Changes
For IT pros, the cloud is a leap from administering server boxes to managing services. The abstraction layer introduced by virtualization makes the physical hardware much less of a concern in most cloud computing scenarios. This is a hard transition, and a culture shock, for many IT pros. With the continual advancement of virtualization technologies, this changing perspective on IT infrastructure is inevitable. One obvious example is that Microsoft System Center Virtual Machine Manager (VMM) Self-Service Portal (SSP) 2.0 introduces a service-centric view of implementing a private cloud of IaaS. SSP 2.0 uses (solution) infrastructures, services, and roles as the building blocks, as shown below. A private cloud IaaS user then forms a service delivery model with a hierarchy in which an infrastructure consists of services, each service includes roles, and virtual machines are deployed based on the defined roles.
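To make the hierarchy concrete, here is a small sketch modeling it as plain types. These class names are illustrative only and are not SSP 2.0's actual object model.

```csharp
using System.Collections.Generic;

// Illustrative model of the SSP 2.0 building blocks: an infrastructure
// contains services, a service contains roles, and virtual machines are
// deployed per role.
class SolutionInfrastructure
{
    public string Name;
    public List<Service> Services = new List<Service>();
}

class Service
{
    public string Name;
    public List<Role> Roles = new List<Role>();
}

class Role
{
    public string Name;
    public int VirtualMachineCount; // VMs deployed based on this role
}
```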
While integrated with VMM, SSP 2.0 is a free offering from Microsoft that includes a set of web portals, a data store, a lightweight provisioning engine, and documentation and guidance. Within SSP 2.0, a datacenter administrator first defines the resource pools of network, compute (RAM), and storage (disk space) resources, along with the cost model for reserving and allocating these resources. There are also predefined templates for deploying virtual machines, imported from VMM into SSP 2.0 as datacenter resources. An authorized business unit administrator then registers the business unit and makes a request to create a solution infrastructure with its included services and roles. Once a request is approved by the datacenter admin, an authorized user can deploy virtual machines on demand within the approved compute and storage quota. The cost of reserving and deploying an infrastructure is calculated according to a chargeback model.
The following is a sample solution infrastructure for a staffing solution. The hiring service posts job openings, accepts resumes, and runs the interview process. Once a candidate is hired as an employee, HR creates a record with the employee information service and establishes the employment history and confidential records, while the employee can use the same service to maintain personal data like home address, phone numbers, and so on. The significance of doing this in a private cloud is that once the cloud is defined, it is centrally managed with self-service, on-demand, workflow, and chargeback capabilities. The system is monitored and managed by VMM, and on a regular basis IT can now generate usage reports to determine the amounts to charge back to business units based on their usage.
Provisioning and deploying a so-called infrastructure here is quite different from the traditional way of deploying servers in an on-premises computing environment. Because the deployment is carried out with virtual machines, the compute and storage requirements can be, and are, provisioned on demand by changing the specifications in a change request. Once the request is approved by the datacenter admin via workflow, SSP 2.0 allows an authorized user to allocate resources within the permitted quota and to create and deploy virtual machines on demand, as shown in the following. The construction of an "infrastructure" is now focused much more on designing and deploying the "service" with the requested capacity, and not so much on the physical hardware and topology involved. This lets IT focus more on enabling the business and less on constantly running cables and setting up servers. This is Infrastructure as a Service with private cloud in action.
Personally, what gets me most excited about the cloud is that everything I have discussed is within reach today. Whether you consume, build, or become a cloud, there are so many opportunities to improve IT service delivery, offer a better experience to users, and grow professionally at the same time. Changes are happening and coming on strong, yet this time I see they are exciting and for the better. Start now: accept the changes, master the changes, and win with the changes.
So let it be known. I am an IT pro and private cloud was my idea.