• Changing the Conversation - Server Virtualization is the Overture, Not the Finale

    I wanted to provide a guest blog post from Brad Anderson, Microsoft Corporate Vice President, in which he shares the news that Gartner has named Microsoft a leader in its 2011 Magic Quadrant for x86 Server Virtualization Infrastructure*. In his post, Brad covers topics such as private cloud computing and the role that virtualization plays in it, as well as touching on public cloud solutions.

    Please check out Brad’s entire post below, and visit the following links for more information on Microsoft’s private cloud or Windows Azure public cloud offerings. Thanks, and let me know if you have any questions or comments. Larry

    Gartner just published the 2011 Magic Quadrant for x86 Server Virtualization Infrastructure* and I’m very happy to report that Microsoft is listed among the leaders. Coming on the heels of InfoWorld’s Virtualization Shootout and a Microsoft-commissioned lab test by Enterprise Strategy Group, the Magic Quadrant rounds out a trifecta of independent recognition for Windows Server Hyper-V’s readiness in the enterprise. Added to this, a growing number of customers like Target and Lionbridge are running their businesses on Microsoft’s virtualization technologies.

    What does this mean for you and your business? For one thing, it means the conversation about virtualization has changed for good. Now you can base your decision on value and which partner has the most compelling vision and strategy for the next logical step—private cloud computing. Private clouds provide elasticity, shared hardware, usage-based self-service—plus unique security, control and customization on IT resources dedicated to a single organization.

    Throughout our industry, virtualization has become widely accepted as a means to a bigger end. To get the full advantage of cloud computing, you need world-class management capabilities that deeply understand the virtualized infrastructure and, more importantly, the applications that are running virtualized. Microsoft’s management solutions provide that insight. System Center 2012 will offer the simplest solution for building private clouds at the lowest price, using the infrastructure you are already familiar with and integrating seamlessly across the common virtualization platforms. “Concero,” a new capability in System Center 2012, empowers the consumers of cloud-based applications to deploy and manage those apps on private and public cloud infrastructures, helping IT managers deliver greater flexibility and agility to their business teams.

    But customers don’t have to wait for System Center 2012 to get started with private cloud. Microsoft and its partners—including Dell, Fujitsu, Hitachi, HP, IBM and NEC—already offer a range of private cloud solutions (custom, pre-configured, or hosted) built on top of Windows Server Hyper-V and System Center. These solutions pool hardware, storage, and compute resources so you can deploy applications quickly and easily. With Microsoft’s private cloud solutions, IT can empower their business groups to deploy applications and ensure those applications perform reliably.

    And our private cloud solution is the only one in the industry that builds a bridge between your existing investments—in both infrastructure and skills—and the public cloud. For many large enterprises, the best solution will be to adopt both public and private clouds, often using them in tandem as a “hybrid cloud.” Microsoft customers will be able to do this seamlessly with a common set of familiar tools—including development, management and identity solutions—that span the entire spectrum, allowing IT to manage their public and private clouds from a single pane of glass and to adapt the mix easily to changing business needs.

    If IT’s primary role is to deliver applications that move the business forward, then an application-centric approach will help you stay focused on what drives business value. It’s this unique combination—private and public clouds built and managed with one set of tools—that enables Microsoft’s customers to focus on the applications rather than the underlying technology. As business needs evolve over time, you maintain control and flexibility over how you create, consume, deploy and manage applications in the cloud. With Microsoft’s comprehensive approach your applications drive the resources, not the other way around.

    Thanks for your time.  Brad

    *The Magic Quadrant is copyrighted 2011 by Gartner, Inc. and is reused with permission. The Magic Quadrant is a graphical representation of a marketplace at and for a specific time period. It depicts Gartner's analysis of how certain vendors measure against criteria for that marketplace, as defined by Gartner. Gartner does not endorse any vendor, product or service depicted in the Magic Quadrant, and does not advise technology users to select only those vendors placed in the "Leaders" quadrant. The Magic Quadrant is intended solely as a research tool, and is not meant to be a specific guide to action. Gartner disclaims all warranties, express or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

  • The Benefits of Private Cloud Computing

    While we've all heard the terms private and public cloud over the last year, those terms may still seem vague to some. So it's probably a good idea to discuss each concept in some detail, and since it's near and dear to my heart I'll start with private cloud. No, it's not a gated community in heaven, though it can be a religious experience when properly implemented.

    To use the formal definition: A private cloud pools and dynamically allocates your IT resources across business units, so that services can be deployed quickly and scaled out to meet business needs whenever they occur. Usage of these resources can be tracked and billed back to each business unit. With private cloud you get many of the benefits of (public) cloud computing with the additional control and customization associated with using resources that are dedicated to your organization.

    What's that all mean? It means that a private cloud takes the concepts of a dynamic datacenter to the next level. In a dynamic datacenter, we use virtualization to - for all intents and purposes - divorce hardware considerations from your IT workloads. The infrastructure you have siloed to different departments, buildings, campuses or what have you, can now be combined into one virtualized pool of resources - infrastructure that IT can offer as a service, quickly and elastically, anywhere in the organization where it's needed. Hence the moniker, Infrastructure as a Service (IaaS). Servers, platforms and applications run on virtualized servers that are quickly deployed and scaled without requiring much integration with the hardware layer. IaaS is currently the beating heart of a private cloud design, but Platform as a Service (PaaS) is coming soon to a private cloud near you (see below).

    The private cloud enables this next level of IT service, using identity management and advanced systems management tools to let IT pros and even end users build up, maintain and tear down resources that previously would have required lengthy IT intervention. Take the case of a developer looking to test a new software product. Previously, she'd have to ring up IT and request a server built to her testing specifications, then wait two weeks for IT to approve the request before someone might get around to giving her a machine. Meanwhile, her testing process is in limbo. In a private cloud, she'll be able to log into a self-service portal, build her own virtual server decked out just the way she needs it, test till her head turns blue and then tear the whole thing down in the end. To the IT manager, this whole transaction will simply show up in his event and audit logs.
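
    To make that self-service flow concrete, here is a minimal sketch of what the developer's round trip could look like if the portal exposed a simple REST API. Everything here - the endpoint, the payload fields, the status value - is hypothetical, invented for illustration; a real portal would define its own interface.

    ```python
    import time

    import requests  # pip install requests

    PORTAL = "https://selfservice.contoso.local/api"  # hypothetical endpoint

    # Request a test VM built to her specifications (fields are illustrative).
    spec = {"name": "test-web01", "cpus": 2, "memory_gb": 4,
            "template": "ws2008r2-iis"}
    vm = requests.post(f"{PORTAL}/vms", json=spec).json()

    # Poll until the portal reports the machine is running (status assumed).
    while requests.get(f"{PORTAL}/vms/{vm['id']}").json()["status"] != "running":
        time.sleep(30)

    # ...test till her head turns blue, then tear the whole thing down.
    requests.delete(f"{PORTAL}/vms/{vm['id']}")
    ```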

    Does this mean he's out of a job? Heck, no. For one, the elements that comprise a private cloud are the same ones you need him for today - Windows Server 2008 R2, Active Directory, Hyper-V, System Center and more. For another, even with these platforms optimized into a working private cloud, you'll need to align these new capabilities with your company's workflows and business requirements. Yes, the IT pro role will likely need to evolve in this scenario: grow from being solely a technologist to someone who strategizes with technology, adding new value to the business by combining technology expertise with business expertise. Find new ways of doing things and push that competitive edge.

    The cloud will evolve, too. From a private cloud foundation, you'll quickly expand to applications and workloads that span across on-premises and off-premises infrastructure. There are several ways to implement this evolution, one of which is the upcoming Windows Azure Platform Appliance, which will allow you to leverage Windows Azure's PaaS power from inside your homey private cloud. Combining on-premises and off-premises cloud resources provides whole new vistas in terms of availability and especially scalability. Managing these resources dynamically and to the best advantage of your business will always require an IT pro role.

    Bottom line: Utilizing a private cloud, whether you build it all locally in your own datacenter or have it built and hosted for you by a third-party, means providing your organization with a dedicated pool of IT resources. That pool will no longer be thought of in terms of the number of servers, but rather in terms of capacity - the number of virtual servers, virtual workloads or applications it can support. Basically, it allows your IT department to deliver infrastructure, platforms and software applications in an easily managed, easily scaled and easily billed service architecture. Even better, that architecture won't be siloed to different departments or divisions, but instead can be managed as a holistic resource across the entire company.

    Hopefully this has made the concept of a private cloud less confusing and more appealing. If you've got any questions, please feel free to comment.

  • Customers Reap Benefits from Comprehensive Cloud Approach

    Take a look at the perspective of Brad Anderson, Corporate Vice President at Microsoft, on Microsoft’s cloud computing strategy, our private cloud solutions and the economics of those solutions versus VMware.


    Click here to read more.

  • Thought Leaders in the Cloud: Talking with Maarten Balliauw, Technical Consultant at RealDolmen and Windows Azure Expert

    Maarten Balliauw is an Azure consultant, blogger, and speaker, as well as an ASP.NET MVP. He works as a technical consultant at RealDolmen. Maarten is a project manager and developer for the PHPExcel project on CodePlex, as well as a board member for the Azure User Group of Belgium (AZUG.BE).

    In this interview we discuss:

    • Take off your hosting colored glasses – architecting for the cloud yields benefits far beyond using it as just a mega-host
    • Strange bedfellows – PHP on Azure and why it makes sense
    • Average is the new peak – The cloud lets you pay for average compute; on-premises makes you pay for peak capacity
    • Datacenter API – It makes sense for cloud providers to expose APIs for many of the same reasons it makes sense for an OS to expose APIs
    • Data on the ServiceBus – Transferring data securely between your datacenter and your public cloud applications

    Robert Duffner: Could you tell us a little bit about yourself to get us started?

    Maarten Balliauw: I work for a company named RealDolmen, based in Belgium. Historically, I've worked with PHP and ASP.NET, and since joining RealDolmen, I have been able to work with both technologies. While my day to day job is mostly in ASP.NET, I do keep track of PHP, and I find it really interesting to provide interconnections between the technologies.

    And Azure is also a great platform to work with, both from an ASP.NET perspective and a PHP perspective. Also, in that area, I'm trying to combine both technologies to get the best results.

    Robert: You recently spoke at REMIX 10, and in your presentation, you talked about when to use the cloud and when not to use the cloud. What's the guidance that you give people?

    Maarten: The big problem with cloud computing at this time is that people are looking at it from a perspective based on an old technology, namely classic hosting. If you look at cloud computing with fresh eyes, you will see that it is really an excellent opportunity, and that no matter what situation you are in, your solution will always be more reliable and a lot cheaper if you do it in the cloud.

    Still, not every existing application can be ported to the cloud at this time. One important metric to use in choosing between cloud and non-cloud deployments is your security requirements. Do you care about keeping your data and applications on premises versus in the cloud?

    Robert: We've obviously engineered Azure to be an open platform that runs a lot more than just .NET, including dynamic languages like Python and PHP that you mentioned, but also languages like Java. But you talk quite a bit about PHP on Azure. From your perspective, why would anyone want to do that when there are so many options for PHP hosting today?

    Maarten: You can ask the same question about ASP.NET hosting. There are so many options to host your .NET applications somewhere: on a dedicated server, on a virtual private server, on a cloud somewhere. So I think the same question applies to PHP, Java, Ruby, and whatever language you're using.

    Azure provides some quite interesting things with regard to PHP that other vendors don't have. For example, the service bus enables you to securely connect between your cloud application and your application that's sitting in your own data center. You can leverage that feature from .NET as well as from PHP. So it's really interesting to have an application sitting in the cloud, calling back to your on-premises application, without having to open any firewall ports at all.

    Robert: In your talk, you also point to the Azure solution accelerators for Memcached, MySQL, MediaWiki, and Tomcat. In your experience, are most people even aware that these kinds of things run on Azure?

    Maarten: I'm not sure, because, traditionally, the Microsoft ecosystem is quite simple. There's Microsoft as a central authority offering services, and then there are other vendors adding additional functionality, bringing some other alternative components. In the PHP world, for example, there is no such thing as a central authority, so information is really distributed across different communities, companies, and user groups.

    I think some part of the message from Azure is already out there in all these communities and all these small technology silos in the PHP world, but not everything has come through. So I'm not sure if everyone is aware that all these things can actually run on Azure, if you're using PHP or MySQL or whatever application that you want to host on Azure.

    Robert: In another talk, you mentioned "Turtle Door to Door Deliveries," and how they estimated needing six dedicated servers for peak load, but because the load is fairly elastic, they saw savings with Azure. Can you talk a little bit more about that example?

    Maarten: That was actually a fictitious door-to-door delivery company, like DHL, UPS, or FedEx, which we created for that talk. They knew the load was going to be around six dedicated servers at peak, but there were times of the day when only one or two servers would be needed. And when you're using cloud computing, you can actually scale dynamically and, for example, during office hours have four instances on Azure, during the evening have six instances, and then at night scale back to two instances.

    And if you take an average of the number of instances that you're actually hosting, you will see that you're not actually paying for six instances; you're only paying for three or, at most, four. That means you have the extra capacity of two machines without paying for it.

    We have done a project kind of like that, for a company in Belgium, doing timing on sports events. Most of these events have a maximum of 1,000 or 2,000 participants, but there are several per year with 30,000 participants.

    We used the same technique for that project as in the example in the talk. We scaled them up to 18 instances during peaks, and at night, for example, we scaled them back to two instances. They actually had the capacity of 18 instances during those peak moments, but on average, they only had to pay for seven instances.
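
    To make the arithmetic concrete, here is a small Python sketch using the fictitious Turtle schedule above. The assumption that each load level lasts roughly eight hours of the day is mine, added for illustration.

    ```python
    # Hours per day at each instance count (eight-hour blocks assumed).
    schedule = {
        4: 8,  # office hours: four instances
        6: 8,  # evening peak: six instances
        2: 8,  # night: two instances
    }

    instance_hours = sum(count * hours for count, hours in schedule.items())
    average_paid = instance_hours / 24  # cloud billing tracks the average
    peak = max(schedule)                # on-premises hardware is sized for this

    print(f"Peak capacity:    {peak} instances")          # 6
    print(f"Average paid for: {average_paid} instances")  # 4.0
    ```

    On fixed hardware you would buy and power all six servers around the clock; in the cloud, the bill follows the average instead of the peak.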

    Robert: One important characteristic of clouds is the ability to control them remotely through an API. Amazon Web Services has an API that lets you control instances, and you recently wrote a blog post showing a PowerShell script that makes an app auto-scale out to Azure when it gets overloaded. What are some of the important use cases for cloud APIs?

    Maarten: If you have a situation where you need features offered by a specific cloud, then you would need those cloud APIs. For example, if you look at the PHP world, there's an initiative, the Simple Cloud API, which is one interface to talk to storage on different clouds, like Amazon, Azure, and Rackspace. It provides the common denominator of all these APIs, which means you don't get all the features of the exact cloud that you are using and testing.
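
    The Simple Cloud API itself is PHP, but the common-denominator idea is easy to show in a few lines of Python. The class and method names below are invented for illustration, not part of any real library.

    ```python
    from abc import ABC, abstractmethod

    class BlobStore(ABC):
        """Lowest-common-denominator storage interface (illustrative)."""

        @abstractmethod
        def put(self, name: str, data: bytes) -> None: ...

        @abstractmethod
        def get(self, name: str) -> bytes: ...

    class InMemoryStore(BlobStore):
        """Stand-in backend; real ones would wrap Azure, Amazon or Rackspace."""
        def __init__(self):
            self._blobs = {}

        def put(self, name, data):
            self._blobs[name] = data

        def get(self, name):
            return self._blobs[name]

    # Application code targets BlobStore only, so backends are swappable -
    # but provider-specific features have no place in the shared interface,
    # which is exactly the trade-off described above.
    store: BlobStore = InMemoryStore()
    store.put("report.txt", b"hello")
    print(store.get("report.txt"))
    ```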

    I think the same analogy goes for why you would need cloud APIs. They're just a direct means of interacting with the cloud platform, not only with the computer or the server that you're hosting your application on but, really, a means of communicating with the platform.

    For example, look at diagnostics - getting the logs and performance counters of your application. On a normal server, you would log in through remote desktop and use a terminal to look at the statistics, how your application is performing and things like that. But if you have a lot of instances running on Azure, it would be difficult to log in to each machine separately.

    So what you can do then is use the diagnostics API, which lets you actually gather all this diagnostic data in one location. You have one entry point into all the diagnostics of your application, whether it's running on one server or 1,000 servers.
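
    As a rough illustration of that single entry point: Windows Azure Diagnostics aggregated data into storage tables, and the sketch below shows the shape of the idea using today's azure-data-tables package for Python. The package is anachronistic for this interview, and the table and counter names are assumed, so treat it as illustrative.

    ```python
    from azure.data.tables import TableClient  # pip install azure-data-tables

    conn_str = "..."  # storage account connection string (placeholder)

    # One shared table collects counters from every instance, so the operator
    # queries a single endpoint instead of remoting into each machine.
    # (Assumes the "PerfCounters" table already exists.)
    table = TableClient.from_connection_string(conn_str, table_name="PerfCounters")

    # Each role instance periodically writes its own counters (names assumed).
    table.create_entity({
        "PartitionKey": "web-role",
        "RowKey": "instance-3_2011-07-01T12-00",
        "CounterName": "% Processor Time",
        "Value": 87.5,
    })

    # The operator reads across all instances with one query.
    for row in table.query_entities("PartitionKey eq 'web-role'"):
        print(row["RowKey"], row["CounterName"], row["Value"])
    ```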

    Robert: That's a great example. You also wrote an article titled "Cost Architecting for Windows Azure," talking about the need to architect specifically for the cloud to get all the advantages. Can you talk a little bit about things people should keep in mind when architecting for the cloud?

    Maarten: You need to take costs into account when you're architecting a cloud application. If you take advantage of all the specific billing situations that these platforms have to offer, you can actually reduce costs and get a highly cost effective architecture out there.

    For example, don't host your data in a data center in the northern US and your application in a data center in the southern US, because you will pay for the traffic between both data centers. If your storage is in the same data center as your hosted application, you won't be paying for internal traffic.
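
    Back-of-the-envelope, the difference looks like this. The price per gigabyte below is a made-up placeholder rather than a quoted Azure rate; the point is only that cross-datacenter traffic is metered while traffic inside one datacenter is free.

    ```python
    gb_per_month = 500     # app-to-storage traffic volume (example figure)
    egress_per_gb = 0.15   # hypothetical $/GB for cross-datacenter traffic

    split = gb_per_month * egress_per_gb  # storage north, application south
    colocated = 0.0                       # same datacenter: internal traffic is free

    print(f"Split across datacenters: ${split:.2f}/month")      # $75.00/month
    print(f"Co-located:               ${colocated:.2f}/month")  # $0.00/month
    ```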

    There are a lot of factors that can help you really tune your application. For example, consider the case where you are checking a message queue, and you also have a process processing messages in this queue. Typically in this case, you would poll the queue for new messages, and if no messages are there, you would continue polling until there's a message. Then you process the message, and then you start polling again. You may be polling the queue a few times a second.

    Every time you poll the queue is a transaction. Even though transactions are not that expensive on Windows Azure, if you have a lot of transactions, it can cost you substantial money. Therefore, if you poll less frequently, or with a back-off mechanism that causes you to wait a little longer when there are no messages, you can drastically reduce your costs.
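
    A minimal sketch of such a back-off loop in Python follows. The receive_message and handle callables are stand-ins for whatever queue client and business logic you actually use; only the back-off arithmetic is the point.

    ```python
    import time

    MIN_WAIT, MAX_WAIT = 1.0, 60.0  # seconds between polls (tunable)

    def poll_forever(receive_message, handle):
        """Poll a queue, doubling the wait after every empty poll.

        receive_message() -> message or None, and handle(message), are
        caller-supplied stand-ins for a real queue client and consumer.
        """
        wait = MIN_WAIT
        while True:
            message = receive_message()
            if message is None:
                time.sleep(wait)                # idle: each long sleep replaces
                wait = min(wait * 2, MAX_WAIT)  # dozens of billable polls
            else:
                handle(message)
                wait = MIN_WAIT                 # traffic is back: poll eagerly
    ```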

    Robert: One of the key challenges for enterprise cloud applications is that they often can't run as a separate island. Can you talk about some of the solutions that exist when identity and data need to be shared between your data center and the cloud?

    Maarten: Actually, the Windows Azure AppFabric component is completely focused on this problem. It offers the service bus, which you can use to have an intermediate party between two applications. Say you have an application in the cloud and another one in your own data center, and you don't want to open a direct firewall connection between both applications. You can actually relay that traffic through the Windows Azure service bus and have both applications make outgoing calls to the service bus, and the service bus will route traffic between both applications.
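
    The key property here is that both parties only ever dial out. The toy below models that routing pattern in ordinary Python, in a single process; it is a conceptual stand-in for the service bus relay, not the real AppFabric SDK.

    ```python
    import queue
    import threading

    class ToyRelay:
        """In-process stand-in for the relay (illustrative only): endpoints
        register with the relay - an outbound call in real life - and the
        relay routes messages between them, so neither side ever accepts
        an inbound connection through its firewall."""

        def __init__(self):
            self._inboxes = {}

        def register(self, name):
            """An endpoint dials out to the relay and gets an inbox back."""
            self._inboxes[name] = queue.Queue()
            return self._inboxes[name]

        def send(self, target, payload):
            """The relay forwards traffic to the named endpoint's inbox."""
            self._inboxes[target].put(payload)

    relay = ToyRelay()

    # "On-premises" service: connects out to the relay and waits for work.
    inbox = relay.register("inventory")
    def serve():
        request = inbox.get()
        relay.send("webapp", b"42 widgets in stock")
    threading.Thread(target=serve, daemon=True).start()

    # "Cloud" application: also talks only to the relay, never to the other
    # network directly, so no inbound firewall ports are opened anywhere.
    reply_box = relay.register("webapp")
    relay.send("inventory", b"how many widgets?")
    print(f"cloud app received: {reply_box.get()!r}")
    ```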

    You can also integrate authentication with a cloud application. For example, if you have a cloud application where you want your corporate accounts to be able to log in, you can leverage the Access Control service. That will allow you to integrate your cloud application with your Active Directory, through the AppFabric ACS and an Active Directory federation server.

    If you have an application sitting in your own data center but you need a lot of storage, you can actually just host the application in your data center and have a blob storage account on Azure to host your data. So there are a lot of integrations that are possible between both worlds.
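
    For that last pattern - an on-premises application using Azure blob storage as its data tier - a minimal sketch with the current Azure Storage SDK for Python looks like this. The SDK is anachronistic for this interview, and the container and file names are placeholders.

    ```python
    from azure.storage.blob import BlobServiceClient  # pip install azure-storage-blob

    conn_str = "..."  # storage account connection string (placeholder)
    service = BlobServiceClient.from_connection_string(conn_str)

    # The application runs in your own datacenter; only bulk data lives in
    # the cloud, addressed as blobs in a container.
    blob = service.get_blob_client(container="archive", blob="scan-0001.tif")

    with open("scan-0001.tif", "rb") as f:     # upload from on-premises disk
        blob.upload_blob(f, overwrite=True)

    data = blob.download_blob().readall()      # ...and read it back on demand
    ```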

    Robert: You manage a few open source projects on CodePlex. Can you talk a little bit about the opportunity for open source on platforms like Azure or those operated by other cloud providers?

    Maarten: Consider, for example, SugarCRM, which is an open source CRM system that you can just download, install, modify, view the code, and things like that. What they actually did was take that application and also host it in a cloud. They still offer their product as an open source application that you can download and use freely. But they also have a hosted version, which they use to drive some revenue and to be able to pay some contributors.

    They didn't have to invest in a data center. They just said, "Well, we want to host this application," and they hosted it in a cloud somewhere. They had a low entry cost, and as customers start coming, they will just increase capacity.

    Normally, if a lot of customers come, the revenue will also increase. And by doing that, they actually now have a revenue model for their open source project, which is still free if you want to download it and use it on your own server. But by using the hosted service that's hosted somewhere on a cloud, they actually get revenue to pay contributors and to make the product better.

    Robert: Maarten, I appreciate your time. Thanks for talking to the Windows Azure community.

    Maarten: Sure. Thank you.


  • Technical computing in the cloud: Big solutions for big challenges

    The following is a guest blog post from Bill Hilf, General Manager, Technical Computing, Microsoft


    Every day there is a lot of interesting news and discussion around cloud computing.  One area that I find important and exciting is thinking about large scale and complex problems - classically tackled by high performance computing systems - and how cloud computing might change the game for these types of applications and workloads.

    In many ways, the cloud enables the next generation of technical computing, or supercomputing, which harnesses massive computing power and enormous data for advanced modeling and simulation.  From disease research to climate change to crash test simulation to designing alternative energies, technical computing in the cloud offers tremendous potential for understanding some of the biggest challenges we face.

    Moreover, by making vast computing resources broadly accessible and affordable, the cloud can make technical computing possible for a much bigger community of scientists, business people and governments.  Greater scientific experimentation, harnessing the wisdom of crowds, global-scale collaboration and extreme scale computational power and data analysis...simply put, the cloud can democratize technical computing.  That's why the cloud, specifically Windows Azure, is a cornerstone of Microsoft's strategy to bring technical computing to the mainstream.

    At the Supercomputing 2010 conference we made a few announcements in this area.  We've made a bio-science application called NCBI BLAST on Windows Azure available to researchers around the world, meaning many more scientists can use the power of cloud computing to potentially identify new species, improve drug effectiveness, produce biofuels, and more.  We also announced an upcoming capability for our Windows HPC Server that allows customers to "burst" to Windows Azure from their on-premises systems for on-demand scale and capacity.

    Already many customers are using Windows Azure for technical computing.  At our Professional Developers Conference in October, Pixar Animation Studios spoke about how it is looking to Azure to improve performance and infrastructure spending for its RenderMan visual effects application.  Oilfield services company Baker Hughes is using Windows Azure to augment its on-premises systems to accelerate its complex drilling simulations, in some cases helping its people do roughly nine months of work in a month.

    This is just the beginning.  The cloud is going to open up more technical computing possibilities than I can imagine.  The cloud is much more than just IT cost savings.  It offers the opportunity for science and business to approach some of humanity's largest problems, delivered at extreme scale to extreme masses.  Enabling broad access to technical computing resources will be one of the fundamental answers to the question of why we need cloud computing.  The possibilities are exciting and, as I wrote in an internal email a couple of years ago when we were forming this business, imagine "What if?"

    Let the experiments begin.

    Bill Hilf
    General manager, Technical Computing, Microsoft