Windows Azure has now been around long enough to be a mature platform for building services on, and as such IT professionals are being asked to look at the service. Of course, IT pros look for slightly different things in a platform than developers do. We're concerned with how things work, with uptime and resilience, and with monitoring and troubleshooting. Let's take a look at the current state of play with the platform, walking through some of the basics and explaining some concepts.
Windows Azure is built from the ground up to be consumed as a service and as a platform, with the intention that you don't need to over-engineer or spend too much time on the nitty-gritty parts of a traditional deployment. For example, if I had been planning and designing a service five or six years ago, I'd have had to think about basics like patching the OS or building in capacity for disaster recovery or preproduction. Those concerns have largely gone away with Windows Azure: because of its utility nature you can have as much capacity as you need, when you need it. We patch the operating system (for the most part) and resilience is built in.
As a result, your architecture becomes far simpler; you only need to think about how you want the final service to run. You also won't always need to incur the cost of running your preproduction and disaster recovery environments, except when you're actually using them, and because they're not a sunk cost in hardware, this will probably be more cost effective.
So what are the parts and how do they fit together? First and foremost, you don't have to use all of the following components together; you can take what you need and use just that.
Within Windows Azure we can broadly provide services by creating instances of one of three roles. The first, the Web role, is an IIS web server that's normally used for hosting a front-end web application, which can be built in ASP.NET, PHP and a few other languages. The second is the Worker role, which allows you to run any custom code on the server and is often used to provide the back-end functionality of a web application. The worker role is much like a Windows Server running a custom service or application that you might deploy on premises, although commonly it's an application that has been developed specifically for Windows Azure.
These two roles are delivered to you when you request them; you don't have to provide a custom build or hard drive (in fact you can't), and as such they are very easy to support, quick to provision and provide a stable, you-know-what-you're-getting approach. All that's required to use these roles is a few clicks, a package file containing the custom code to run and a configuration file that describes the architecture of the service you're running.
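As a sketch, that configuration file is a small piece of XML telling the platform what to run and how many copies to run. The service and role names below are hypothetical, and the fragment is simplified to the part relevant here:

```xml
<?xml version="1.0"?>
<!-- Hypothetical, simplified service configuration: asks the platform
     to run two instances of a web role. Scaling out is a matter of
     changing the count and redeploying the configuration. -->
<ServiceConfiguration serviceName="MyService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <Instances count="2" />
  </Role>
</ServiceConfiguration>
```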
The third role type is very interesting and highly flexible, but also more limited. The VM role is a custom VHD that you build on premises using Hyper-V and Windows Server 2008 R2, and as such you can put whatever you like into that image. It's perfect for deployments where you need to deliver something complicated, like a special piece of software with an installer that needs lots of clicks. There are some limitations to this flexibility though. Firstly, it's stateless: you lose changes between reboots, so every time you reboot your VM role instance the first thing it will do is come out of sysprep. This is actually not just a limitation of the VM role; it affects all instances. However, the only place you're likely to run into the issue is with a VM role, because with the other roles the developers will have built an application that doesn't rely on local state. For this statelessness reason you can't, for example, host a SharePoint site or a domain controller in Windows Azure.
The second limitation of the VM role is upload time. Typically, when you've built your VHD it'll be about 30 GB, and that can take quite a while to upload; I once left a VHD uploading for three days. The good thing is that you can test, before you upload it, that it'll work and that Windows Azure will be able to host it. Provisioning time can be longer with the VM role too, because there's more custom work to do, so that's something to think about when you want to spin up more instances.
So we have three types of role, but how many instances of those roles can you have, surely not just one? That's right, you can have many instances of a role.
If you wanted a parallel between instances and a traditional, on-prem deployment model, an instance would be a server. In fact, each instance is a virtual machine within Windows Azure; some resources will be shared by many virtual machines on the same physical server and some will be dedicated to a particular virtual machine. For example, an Extra Small virtual machine uses shared CPU (just as most hypervisors do on premises, by default), but the other instance sizes have dedicated CPUs. What does this mean? It means that, for example, a 16-core physical server could host two Extra Large instances or 16 Small instances. Of course, none of this matters to you, because you don't care about the physical hardware – that's the bit we take care of. The following table shows you the differences between the different sizes of instances.
So we have some instances of some roles that can provide some type of service, which is excellent, but the next piece of the puzzle is storage. You'll see from the table above that instances have their own storage, but since instances are stateless, where does that leave that storage? The answer is that anything saved to an instance's internal storage may not persist between reboots, because the instances themselves are stateless. To overcome this we have three different types of storage that do not form part of an instance or role but are a totally separate entity within the service. If you wanted a parallel to an on-prem deployment, this would be an area of shared storage like a network drive.
There are three types of Windows Azure Storage, each optimised to help you achieve different things. The first is the binary large object, or BLOB. BLOB storage can contain anything you like: pictures, music, data files, anything you want to put in there. A clever feature of Windows Azure Storage is Windows Azure Drive, which allows a page BLOB containing a VHD to be mounted as a drive in a virtual machine.
Tables are a far more efficient way to deal with large amounts of data that needs some structure, like a list of names and addresses, for example. However, unlike a table as you would find in SQL, the tables need not be uniform: the first row of a table could contain name and address data while the second contains the number of fish in a bowl. Obviously that particular example would make the information less useful, but it provides lots of flexibility, and any rules about structure can be custom built within the application.
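That flexible schema is easy to picture in code. The sketch below is a toy illustration (the property names are invented, and this is not the real storage API): each entity is just a bag of named properties, and two rows in the same table can carry entirely different ones.

```python
# Each "entity" in a schema-less table is a set of name/value properties.
# Only the key properties are mandatory; everything else can vary per row.
entity_one = {
    "PartitionKey": "customers",
    "RowKey": "0001",
    "Name": "A. Person",
    "Address": "1 Example Street",
}

entity_two = {
    "PartitionKey": "aquariums",
    "RowKey": "0001",
    "FishInBowl": 3,  # a completely different shape of row, same table
}

def properties(entity):
    """Return the non-key properties of an entity."""
    return {k: v for k, v in entity.items() if k not in ("PartitionKey", "RowKey")}

print(sorted(properties(entity_one)))  # ['Address', 'Name']
print(sorted(properties(entity_two)))  # ['FishInBowl']
```

Any rules about which shapes of row are allowed live in the application, not in the storage layer.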
Queues are used for communication and are just what they seem: you drop a message on and lift it off. They're very useful for communication between roles and ensure that messages always get delivered.
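The delivery guarantee generally comes from a simple pattern: taking a message doesn't delete it, it only hides it for a visibility timeout; the consumer deletes it once processing succeeds, and if the consumer crashes first the message reappears for someone else. A toy in-memory sketch of that pattern (not the real storage API):

```python
import time

class ToyQueue:
    """In-memory sketch of the get/process/delete pattern used by cloud queues."""

    def __init__(self, visibility_timeout=30.0):
        self.visibility_timeout = visibility_timeout
        self._messages = {}  # id -> (body, invisible_until)
        self._next_id = 0

    def put(self, body):
        self._next_id += 1
        self._messages[self._next_id] = (body, 0.0)
        return self._next_id

    def get(self):
        """Return (id, body) of a visible message and hide it, or None."""
        now = time.monotonic()
        for msg_id, (body, invisible_until) in self._messages.items():
            if invisible_until <= now:
                self._messages[msg_id] = (body, now + self.visibility_timeout)
                return msg_id, body
        return None

    def delete(self, msg_id):
        """Called only once the message has been processed successfully."""
        self._messages.pop(msg_id, None)

q = ToyQueue(visibility_timeout=0.05)
q.put("resize image 42")
msg_id, body = q.get()
# While we hold the message, no other consumer can see it...
assert q.get() is None
# ...but if we crash without deleting it, it becomes visible again.
time.sleep(0.1)
assert q.get() is not None
```

In a real role, a worker would loop on get, do its work, then delete.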
A very cool element of Windows Azure Storage that has just been introduced is that BLOBs and tables are now replicated between data centres within a region automatically, so in the event of a data centre failure another copy exists within the region. For Europe, that means between Dublin and Amsterdam. The second very cool element is that the data is replicated three times, so should a single disk fail, or the infrastructure housing the disk (the rack, power supply, etc.) fail, there are other copies available. This level of fault tolerance would be very costly to implement on-prem.
Finally, to help improve the performance of data delivery for an application living in Windows Azure, we provide a Content Delivery Network (CDN) which, when enabled, distributes your data to a data centre local to the users accessing your service. For example, if your service is hosted in Europe with CDN enabled and a customer in Singapore accesses data, then a copy of that data is temporarily located near Singapore. When the timeout for the data expires, it is removed from that region. The timeout is, of course, configurable.
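Conceptually, an edge node behaves like a cache with a time-to-live. The following is a simplified model of that behaviour, not the actual CDN mechanics:

```python
import time

class EdgeCache:
    """Toy model of a CDN edge node: serve local copies until their TTL expires."""

    def __init__(self, origin, ttl_seconds):
        self.origin = origin   # callable that fetches from the home data centre
        self.ttl = ttl_seconds
        self._cache = {}       # key -> (value, fetched_at)
        self.origin_hits = 0   # how often we had to go back to the origin

    def get(self, key):
        now = time.monotonic()
        if key in self._cache:
            value, fetched_at = self._cache[key]
            if now - fetched_at < self.ttl:
                return value   # served locally: no trip to the home region
        self.origin_hits += 1
        value = self.origin(key)
        self._cache[key] = (value, now)
        return value

edge = EdgeCache(origin=lambda k: f"content:{k}", ttl_seconds=0.05)
edge.get("logo.png")
edge.get("logo.png")           # second request within the TTL: served locally
assert edge.origin_hits == 1
time.sleep(0.1)
edge.get("logo.png")           # TTL expired, so it is fetched again from origin
assert edge.origin_hits == 2
```

A longer TTL means fewer trips back to the home region but staler content, which is the trade-off the configurable timeout controls.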
Windows Azure allows you to integrate your application with your organisation's Active Directory using ADFS 2.0. This provides a secure way to control access based on the same credentials that let users log on to their computers and to much of your on-prem infrastructure, including file shares and other access users take for granted. Deploying this type of authentication helps people use applications seamlessly, but it also helps you manage their access: providing someone with access to a Windows Azure application can be as simple as making them a member of a group in Active Directory.
In addition you can control access using a variety of web providers like Windows Live ID, Google, Yahoo! and Facebook, which are especially useful for publicly accessible services.
Monitoring and troubleshooting
Now we have all the puzzle pieces in place, let's think about how we do some of our traditional IT tasks with those roles.
Let's say you want to understand what's going on in a Windows Azure service that you've deployed – something you'll likely be asked to do if you've got any service management ethos in place. How do we do that? The first thing to understand is that the roles are just servers running a modified but familiar OS, Windows Server; the second is that instances of roles are stateless and changes don't persist between reboots, which means neither do logs, troubleshooting information or minor changes.
That changes what you need to do to monitor and troubleshoot. Firstly, logs need to be shipped off site to BLOB storage regularly, but not excessively, because storing too much information will start to cost you money – this is utility computing, after all. So with logs you need to ensure that the roles are configured to save just enough information for your needs.
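A hedged sketch of the principle, independent of the actual diagnostics API: keep a verbosity threshold on the instance and ship only records that pass it, on a schedule, so your storage bill reflects what you actually need. The buffered records here are invented for illustration.

```python
import logging

# Records below this threshold never leave the instance, keeping storage costs down.
SHIP_LEVEL = logging.WARNING

def select_for_transfer(records, threshold=SHIP_LEVEL):
    """Pick only the records worth paying to store off-instance."""
    return [r for r in records if r[0] >= threshold]

# Hypothetical locally buffered records: (level, message)
buffered = [
    (logging.DEBUG, "cache miss for item 17"),
    (logging.WARNING, "retrying storage call"),
    (logging.ERROR, "unhandled exception in worker loop"),
]

to_ship = select_for_transfer(buffered)
# In a real role you would now write these to BLOB or table storage on a
# timer (say, every five minutes), then clear the local buffer.
print(len(to_ship))  # 2
```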
From within the Windows Azure web portal you can launch RDP sessions to your instances running in Windows Azure, which, again, requires some up-front configuration and the provision of a security certificate as part of specifying the service. From this RDP session there is much you can do, but many things you can't. You can see Task Manager, view the event log and so on, but you can't make a lasting fix. For example, say you have an errant registry entry causing an application problem: you edit it, fix it and all is well. Then the instance is rebooted and reprovisioned, and your registry change is lost forever. Any change that you want to persist on your web and worker roles needs to be made as a change to the application package; for the VM role you can also make the required changes to your VHD and re-upload it.
How do we do some more integrated monitoring then? That's where the Windows Azure Monitoring Pack for System Center Operations Manager 2007 steps in. With this pack you can monitor your Windows Azure service just as you would anything else within SCOM, with the ability to create alerts and so on. Of course, if you're building a business-critical service, it's unlikely that every aspect of it is based in Windows Azure, so with SCOM you can monitor your service end to end, building alerts to notify you if, say, the Internet connection that joins your on-prem database service to the Azure service has a wobble.
Trying it out
Now that we've covered some of the basics, you're probably thinking about trying some of this out. The easiest way is with the Azure Monitoring Evaluation, which will give you a System Center Operations Manager and Active Directory environment along with a Windows Azure application to deploy, monitor and play with.
Windows Azure is now incredibly deep, so you'll want to learn more, including the depths of how the CDN works, how caching works, how PKI enables Windows Azure's security model and, critically, how SQL Azure can allow you to place structured data into the cloud.
Andrew and I often get asked questions about some of the more basic elements of the job, and a question that comes up time and time again is: what are the best tools for doing X? I often get asked how you start to plan a migration, how to understand what's out there in your environment and then how to move into deploying Windows. We also get asked all manner of questions around managing AD, System Center, security and clever ways to do things. I thought I'd compile a short list of some of our favourites; hopefully you'll find some nuggets, but do share your thoughts in the comments.
MAP The Microsoft Assessment and Planning (MAP) Toolkit is an agentless inventory, assessment, and reporting tool that can securely assess IT environments for various platform migrations – including Windows 7, Windows Server 2008 R2, Hyper-V, Windows Azure, and Hyper-V Cloud Fast Track. I find this toolkit a fabulous planning resource, which is why it's top of this list. It simply looks at your environment and provides reports that, with some tweaking, you can use to support things like a request for funding, or just to work out how far through a migration you are. For example, it can look at your desktop estate and tell you how many PCs don't have hardware capable of running Windows 7. Andrew is also a big fan of MAP.
OEAT The Office Environment Assessment Toolkit is a free downloadable executable (.exe) that scans client computers for add-ins and applications that interact with Microsoft Office 97, Microsoft Office 2000, Microsoft Office XP, Microsoft Office 2003, the 2007 Microsoft Office system, and Microsoft Office 2010. You use OEAT during the assessment phase of your application compatibility and remediation project, which is described in detail in the Office 2010 application compatibility guide. The following figure shows how OEAT fits into the overall process of assessing application compatibility.
MDT The Microsoft Deployment Toolkit is THE tool to use to get any version of Windows deployed within your organisation. It simplifies the process of creating dynamic deployments that can adapt to the hardware or environment into which they are being delivered. If you already use System Center it integrates very well, and the new beta integrates with System Center 2012 too. MDT also has a task sequence that lets you automatically P2V an XP machine while migrating it to Windows 7, allowing full access to the original XP machine, all its apps and data.
USMT User State Migration Tool (USMT) 4.0 is a scriptable command-line tool that provides a highly-customizable user-profile migration experience for IT professionals. USMT includes two components, ScanState and LoadState, and a set of modifiable .xml files: MigApp.xml, MigUser.xml, and MigDocs.xml. In addition, you can create custom .xml files to support your migration needs. You can also create a Config.xml file to specify files or settings to exclude from the migration.
IEAK The Internet Explorer Administration Kit (IEAK) simplifies the creation, deployment and management of customised Internet Explorer packages. The IEAK can be used to configure the out-of-box Internet Explorer experience or to manage user settings after Internet Explorer deployment. What I love about this tool is that it allows so much control over the browsing environment, giving you complete manageability: where some other browsers give you about 87 settings that don't do a whole lot, IE gives you 1,500+ to ensure it fits your organisation perfectly.
Security Essentials If your org has fewer than 10 PCs then this is the free antivirus for you; likewise, use it at home. Security Essentials uses the same signatures as Forefront and has won a slew of awards for being very user friendly. You shouldn't really need to pay to keep safe.
Sysinternals Books have been written about this set of tools, which is so powerful it helps identify and solve serious security and malware issues. Mark Russinovich and friends have created and documented an ultra-powerful set of tools. Some of my favourites are PsExec (which has saved my life and career on many an occasion), BgInfo, which tells me all the details I need to identify a server at a glance from the desktop, and ZoomIt, which you'll have seen if you've ever watched me demo live.
RDC Man RDCMan manages multiple remote desktop connections. It is useful for managing server labs where you need regular access to each machine, such as automated check-in systems and data centres. It is similar to the built-in MMC Remote Desktops snap-in, but more flexible.
Mouse without Borders has been an internal tool at Microsoft for a long time. It's an immensely useful tool if you use multiple PCs: it allows you to share a single mouse and keyboard across several machines – sort of like a reverse RDP. The great thing is that it works perfectly when you have a few laptops to work on at one time, as you can use the monitor from each to provide multiple displays.
Some learning tools
Deployment learning portal is the place to learn how to deploy Windows.
MVA is the place to learn how to use the cloud, and virtualisation and tons and tons of other stuff.
One of the questions I often get into when talking with folks who normally have a dev head on is why, with Windows Azure, you'd want to monitor an application with System Center, and why you wouldn't just build a bespoke monitoring solution since you're building an application already. The answer is actually a very obvious one: to integrate with existing systems. Anyone with an ounce of IT operational background or ITIL training will tell you that you need to centralise monitoring and ultimately to simplify it. The first obvious reason you'd do that is to save money, but there's a second reason:
That amazing application probably isn’t running in isolation.
You'll find that it requires other components, like network connectivity (yes, even in the cloud), to be in place and available – particularly if you're running in a hybrid environment. A holistic management solution such as System Center will help you achieve an overall view of what's going on and of the dependencies between the various components of the system.
Interestingly, we've just moved such a system to Windows Azure, along with the required monitoring, and documented the whole process in a case study of moving a business-critical application to Windows Azure. The article is well worth a read for anyone thinking about moving an application to the cloud, but the following stuck out for me as benefits:
Accuracy and timeliness are so important in IT operations it's untrue, but that second benefit, around reusability, is also key. Not only will you have developed a reusable system for monitoring Windows Azure applications, you'll also have reusable skills.
How do you get started? Well, you could just download this little package of two ready-made evaluation VHDs and play with a pre-built Windows Azure application. I have, and it really doesn't take long to get to grips with.
A few weeks ago I was in a Telegraph supplement talking about cloud. Here’s the online version of the article on the Telegraph (UK national newspaper) website.
This one-day online conference features a range of sessions specially tailored for IT Pros and covering the big challenges for the year ahead.
Join Microsoft experts plus professionals from IT departments around the UK to discuss topics such as how we will embrace the influx of consumer devices into the workplace, the new features available in SQL Server Denali, and how Windows Azure can help you make sense of your cloud offering. I'm really excited about this conference, as we're going to be using some great Microsoft Most Valuable Professionals and some of our UK technical community who are really on the tools to share their experience.
The conference will be presented through LiveMeeting, and throughout the sessions you’ll be able to interact and pose your questions on the topics.
Date & Time – 27 October 2011, 09:00-16:00
Registration & Detailed Agenda - WWE
Further Information – TechNet Blog