The first post was an overview of cloud terminology, offerings and the cloud world in general. In this post I'll drill into two areas: the Windows Azure Platform, and what makes a good or bad cloud application.
Windows Azure Platform is a Platform as a Service (PaaS) offering. It provides an organization with a platform for running Windows applications and storing their data in the cloud. These applications could be existing applications used in your organization today that have been converted, or brand new ones written specifically to run on Windows Azure. Developers can create applications for Windows Azure Platform using familiar tools such as Visual Studio 2010.
As a platform, Windows Azure has services that you can consume: Windows Azure, SQL Azure and Windows Azure platform AppFabric. In very simple terms, these equate to an operating system, a database, and communication and security services.
Windows Azure provides three core components, Compute, Storage and the Fabric, managed by the Fabric Controller. Compute is effectively the Windows operating system; each running copy of your application is an instance. Instances come in two flavours, a Web role or a Worker role. Web roles accept and process HTTP requests using IIS. Not everything you may want to run in Windows Azure is a web application, so Windows Azure also provides Worker roles. A Worker role instance is quite similar to a Web role instance; the key difference is that a Worker role instance does not have IIS preconfigured. Web and Worker roles can communicate with each other via technologies like WCF, or using Windows Azure Storage queues.

Regardless of role, Web and Worker roles will need to store data, and Windows Azure provides three storage options: blobs, tables and queues. Blobs are the easiest to use; they can be very large, up to a terabyte, and can be subdivided into blocks. Another way to use blobs is through Windows Azure XDrives, which can be mounted by a Web role instance or Worker role instance. The underlying storage for an XDrive is a blob; once the drive is mounted, the instance can read and write file system data that gets stored persistently in that blob.
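To illustrate the block idea, here is a rough sketch of uploading a large blob as individual blocks and then committing them as one object. The BlockBlobStore class, its method names and the in-memory "store" are invented for illustration only; the real Windows Azure Storage service exposes this pattern through a REST API.

```python
# Sketch only: a large blob uploaded as blocks, then committed as one object.
class BlockBlobStore:
    def __init__(self):
        self._pending = {}   # blob name -> {block id: bytes}, not yet committed
        self._blobs = {}     # blob name -> committed bytes

    def put_block(self, blob_name, block_id, data):
        self._pending.setdefault(blob_name, {})[block_id] = data

    def put_block_list(self, blob_name, block_ids):
        # Commit: the listed blocks, joined in order, become the blob's content.
        blocks = self._pending.pop(blob_name)
        self._blobs[blob_name] = b"".join(blocks[b] for b in block_ids)

    def get_blob(self, blob_name):
        return self._blobs[blob_name]

store = BlockBlobStore()
payload = b"x" * 10_000                      # pretend this is a large file
chunk = 4096
ids = []
for i in range(0, len(payload), chunk):      # upload in 4 KB blocks
    block_id = f"block-{i:08d}"
    store.put_block("backup.bin", block_id, payload[i:i + chunk])
    ids.append(block_id)
store.put_block_list("backup.bin", ids)      # commit the block list
```

The point of the two-phase upload is that very large blobs can be sent in resumable pieces, then made visible atomically when the block list is committed.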
Blobs are just right for some kinds of data, but they're too unstructured for many situations. To allow applications to work with data in a more granular way, Windows Azure storage provides tables. Don't be misled by the name: these aren't relational tables. In fact, even though they're called "tables", the data they contain is actually stored as a set of entities with properties. A table has no defined schema; instead, properties can have various types, such as int, string, bool, or DateTime. And rather than using SQL, an application can access a table's data using ADO.NET Data Services or LINQ. A single table can be quite large, with billions of entities holding terabytes of data, and Windows Azure storage can partition it across many servers, if necessary, to improve performance.
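A sketch may make the entity model clearer. PartitionKey and RowKey are the real addressing scheme Windows Azure tables use; the tiny in-memory "table" below is invented purely to illustrate it, not the actual storage API.

```python
# Sketch: Azure tables hold entities (property bags), not relational rows.
from datetime import datetime

table = {}  # (PartitionKey, RowKey) -> entity

def insert_entity(entity):
    table[(entity["PartitionKey"], entity["RowKey"])] = entity

# Entities in one table need not share a schema:
insert_entity({"PartitionKey": "customers", "RowKey": "0001",
               "Name": "Contoso", "Active": True})
insert_entity({"PartitionKey": "orders", "RowKey": "0042",
               "Total": 129, "Placed": datetime(2010, 3, 1)})

# The PartitionKey is what lets the service spread one logical table across
# many storage servers: entities sharing a PartitionKey stay together, while
# different partitions can live on different servers.
partitions = {pk for pk, _ in table}
```

Choosing a PartitionKey is therefore a performance decision: queries within one partition are cheap, while the number of distinct partitions bounds how far the table can be spread out.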
The third option in Windows Azure storage, queues, has a somewhat different purpose. We use queues primarily to provide a way for Web role instances to communicate with Worker role instances. For example, a user might submit a request to perform some compute-intensive task via a Web page implemented by a Windows Azure Web role. The Web role instance that receives this request can write a message into a queue describing the work to be done. A Worker role instance that’s waiting on this queue can then read the message and carry out the task it specifies. Any results can be returned via another queue or handled in some other way.
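The Web role → queue → Worker role flow described above can be sketched as follows, using Python's in-process queue as a stand-in for a Windows Azure Storage queue; the function names are invented for illustration.

```python
# Sketch of the Web role -> queue -> Worker role pattern.
import queue

work_queue = queue.Queue()      # stand-in for an Azure Storage queue
results_queue = queue.Queue()   # one way to return results, as noted above

def web_role_handle_request(task):
    # The Web role writes a message describing the work to be done, then
    # returns immediately rather than blocking the user on a long computation.
    work_queue.put(task)

def worker_role_poll_once():
    # The Worker role, waiting on the queue, reads the next message and
    # carries out the task it specifies.
    task = work_queue.get()
    result = sum(task["numbers"])          # stand-in for compute-intensive work
    results_queue.put({"task_id": task["task_id"], "result": result})

web_role_handle_request({"task_id": 1, "numbers": [1, 2, 3, 4]})
worker_role_poll_once()
reply = results_queue.get()
```

The design benefit is decoupling: Web and Worker role instances can be scaled independently, and a backlog in the queue simply signals that more Worker instances are needed.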
In a nutshell, SQL Azure is SQL Server in the cloud, minus a few features not yet implemented in this first version. You can have an instance of SQL Azure on its own, with no need for Windows Azure. Your database applications access SQL Azure in the same way they access local database copies, and, as with Windows Azure, you can use the same familiar tools you use to manage SQL Server on-premises to access and manage SQL Azure. It really is that simple. The one caveat is database size: SQL Azure is currently limited to 10 GB, with a planned increase to 50 GB by June 2010.
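In practice, "the same way they access local database copies" often comes down to swapping a connection string. A sketch, where "myserver", the database name and the credentials are hypothetical; the `*.database.windows.net` host format is the one SQL Azure uses:

```python
# Sketch: the visible change when repointing an app at SQL Azure is mostly
# the connection string. Server/database/user names here are made up.
on_premises = ("Server=SQLSRV01;Database=Orders;"
               "Integrated Security=SSPI;")
sql_azure = ("Server=tcp:myserver.database.windows.net;Database=Orders;"
             "User ID=admin@myserver;Password=...;Encrypt=True;")
```

Note that SQL Azure uses SQL authentication over an encrypted connection rather than integrated Windows authentication, which is one of the small differences hiding behind "that simple".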
Windows Azure platform AppFabric is made up of two components, Service Bus and Access Control. Before going into these two components it's worth noting that there is also a product called Windows Server AppFabric. Currently these two "AppFabric" products are different; however, they come from the same product team within Microsoft, and the product roadmap includes closer synergy over time.
The Service Bus component of Windows Azure platform AppFabric facilitates communication between applications across the web. Service Bus is the broker in all this: it takes requests from clients, finds the service they are requesting and passes each request to the endpoint. Service Bus is a cloud service that you can use on its own; EasyJet in Europe does exactly that. How Service Bus works is relatively simple. When you create a web service in your organization using a technology like WCF, you need to expose an endpoint so clients can access the service. When those clients are outside your organization you have to figure out how to advertise the service, how to traverse firewalls and how to cope with NAT. Using Service Bus, you first register your endpoint with Service Bus, which then advertises it using a discoverable URI. This enables clients anywhere to find the service. Next, your application opens a connection to Service Bus, which in turn keeps that connection open. Keeping the connection open ensures that firewalls and NAT are no longer an issue: Service Bus can always send traffic to your application, and because the connection was established from within your network, no additional port openings are required on the firewall. At no time does Service Bus expose anything specific about your network; Service Bus supplies the address clients use to find your endpoints.
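The relay flow just described can be sketched in a few lines. The Relay class and its methods are invented to illustrate the message flow only; they are not the real Service Bus API, which in practice is consumed through WCF relay bindings.

```python
# Sketch of the relay idea: the service dials OUT and keeps the link open,
# so clients can reach it without any inbound firewall changes.
class Relay:
    def __init__(self):
        self._services = {}   # advertised URI -> open connection (callback)

    def register(self, uri, handler):
        # The on-premises service connects out to the relay and keeps the
        # connection open; the relay advertises the URI to clients.
        self._services[uri] = handler

    def send(self, uri, request):
        # Clients talk only to the relay's public URI; the relay forwards the
        # request down the already-open connection to the real service.
        return self._services[uri](request)

relay = Relay()
relay.register("sb://contoso/orders", lambda req: {"status": "ok", "echo": req})
reply = relay.send("sb://contoso/orders", {"order": 42})
```

Because the client only ever sees the relay's URI, nothing about the service's internal network address leaks out, which is the property described above.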
Access Control provides a distributed application with identity control. The Access Control service can be used to supply clients with claims information that a claims-aware application can then act on. Access Control issues tokens to a client application that authenticates with it. In order to authenticate with Access Control, the application sends one of three supported pieces of identifying information.
Access Control can then create a token using this information and various rules you can configure. The end result of this process is a set of claims the server application can use to determine what this client application can and can’t do.
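The rule-driven transformation can be sketched as follows. The rule format and claim names are invented for illustration; the real service configures rules through its management interface, but the shape of the idea, input claims in, output claims out, is the same.

```python
# Sketch of claims transformation: Access Control takes what it knows about
# the caller plus configured rules, and issues a token holding output claims.
rules = [
    # (input claim to match) -> (output claim to issue)
    (("group", "purchasers"), ("action", "submit-order")),
    (("group", "managers"),   ("action", "approve-order")),
]

def issue_token(input_claims):
    output = [out for (match, out) in rules if match in input_claims]
    return {"claims": output}

token = issue_token([("group", "purchasers")])
```

The server application then inspects the claims in the token, here, which actions the caller may perform, rather than re-implementing authentication and mapping logic itself.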
So what workloads are good candidates for running in the cloud?
The “On and Off” pattern reflects applications that you run on a cyclic basis: web applications that you use for a period of time, turn off, update with new site information, and then turn on again at a later date. Run on-premises, this pattern ties up servers that are underutilised for most of their running time, yet you cannot add additional workloads to them because of how demanding the workload is during its active cycle. Here, you are stuck with servers you need to pay for, monitor and maintain, all the while consuming both power and space in your datacentre.
The “Growing Fast” pattern is the one all start-ups aspire to; this pattern would map to an application like Twitter. You provision a server with the application on it and see how it runs, hoping it takes off and hundreds of thousands of people use it. Inside an organization, that sort of growth is a huge challenge for the IT department, which has to continually review hardware requirements to keep pace with demand. In the Windows Azure environment, provisioning new instances to serve clients is roughly twenty minutes away; in your own datacentre, provisioning a new server is days or weeks away.
The “Unpredictable Bursting” pattern is a marketing or sales team’s dream: a web application they run suddenly becomes the “in” thing and everyone wants a piece of it. For the IT department it’s another huge challenge; the sudden spike ruins weeks of capacity planning in seconds. The IT department faces a double-edged sword: over-specify capacity and servers sit idle; under-specify capacity and valued customers are turned away.
The “Predictable Bursting” pattern has similar characteristics to the “On and Off” pattern, except that you know in advance when demand will be high. If we think about it we can all probably name at least half a dozen examples: I bet the IRS sites in the US see a spike from February through tax day in April, and Ticketmaster is another company that can probably relate directly to this pattern. Any organization that runs a sales promotion can create a predictable burst.
All of these patterns represent capacity in datacentres that is being wasted today. As you think about cloud computing for your organization, that is one key consideration. In a lot of ways it’s the same consideration you apply to virtualization: get more out of each server by running additional workloads on it.
So what is cloud-friendly and what isn’t? That is the sixty-four-thousand-dollar question, and part of the answer isn’t based on technology: it also depends on what business problem you are trying to solve. Windows Azure is designed to run applications, and if applications are running on Windows on-premises today, there is a good chance they will run on Windows Azure. However, a number of applications and services are not suitable for Windows Azure, for example Active Directory, DNS, DHCP, NPS, IAS, WDS or TMG. All of these are infrastructure services, and they are not what Windows Azure is targeted at. In fact, today, none of the services I mentioned above can be installed on Windows Azure. That is by design. Future developments of Windows Azure may change this, but as of the published date of this article that is not the case.
Another consideration when thinking about cloud services is legal compliance. Take Germany or the UK, for example: both countries have regulations stating that certain financial or government information cannot be stored outside their national boundaries. This means you need to ensure that whichever cloud vendor you choose has a datacentre in your locale and, even before you start to build a cloud solution, that the vendor does not geo-replicate your data outside it.
In this post I wanted to provide a more in-depth look at Windows Azure and Cloud applications, in general. In the final part of this series I’ll cover how this could affect the IT Pro.
Sections of this post have come from the Introducing Windows Azure whitepaper.
Videos / Webcasts
Real World Azure: The IT Professional’s Role and Windows Azure
TechNet Radio Microsoft Cloud Services: Windows Azure in Education
IT Manager Community Chat with Kevin Remde: Collaboration Online: Windows Azure
TechNet Edge: Overview of Cloud Computing
TechNet Edge: IT Professional’s Role and Windows Azure
TechNet Edge: Cloud Computing Business Scenarios
TechNet Edge: Cloud Security
TechNet Edge: Cloud Trust at 10,000 feet
White Papers / Datasheets
Introducing Windows Azure
Introducing the Windows Azure platform
Online Services Datasheet
Windows Azure TCO Tool