Cloud Insights from Brad Anderson, Corporate Vice President, Enterprise Client & Mobility
The IT industry never slows down and IT pros are constantly assessing what’s next and what’s best – not only for their business, but also for their careers.
The virtualization world is going through a big inflection point, and this is leading to a lot of noise and conflicting information in the marketplace. Over the last several years, VMware experts have seen a big benefit to their careers from the widespread adoption of virtualization, but everyone is also looking ahead to what’s next.
Over the last few years, Hyper-V has been going from strength to strength, leading to more and more organizations adopting this platform. As demand for Hyper-V grows, there are substantial career benefits for virtualization professionals who are “bilingual” – i.e., those who are comfortable with both Microsoft and VMware platforms.
With this in mind, I am happy to announce two brand new resources from Microsoft that help bring more IT professionals along on the journey: A new Microsoft Virtualization certification, as well as a brand new website, Virtualization2.
The new certification has been designed around Microsoft Server Virtualization with Windows Server 2012 R2 Hyper-V and System Center 2012 R2 Virtual Machine Manager, and it is geared toward the skills of VMware VCPs. Virtualization2 also has some great, free resources that Microsoft has developed to help IT pros who are VMware experts build their expertise in Microsoft virtualization.
This new certification comes with a new way to prepare for your certification exam. To get certified you can train with Microsoft Learning Partners (as with any of our other certifications), or you can take the free online training with Microsoft technical experts via Microsoft Virtual Academy.
You can sign up for this free training (which will be live and interactive on November 19 & 20, hosted by Symon Perriman and Telmo Sampaio) by clicking here.
As a bonus, you’ll receive a voucher to take the new certification exam for free (regular cost is $150 US).
As you check out the site and this certification, I want you to keep in mind that this is not just about virtualization. The virtualization aspect of this is only the tip of a very large iceberg. I believe, looking 2-3 steps ahead, that getting acquainted with Hyper-V is going to future-proof your career and provide you with the broadest possible skill set.
This is a big opportunity for you, your organization, and your career – and we’re excited to help you make the most of it.
Earlier I posted the big news that the Microsoft Remote Desktop app is now available for Android, iOS and Mac OS X.
As of this morning, after just a few days of availability, the Remote Desktop app has been downloaded 1 million times!
Downloads have been steadily growing, and this is now the fastest growing Android app from Microsoft, ever.
This kind of adoption is exciting; the Remote Desktop App brings the dynamic experience of Windows to devices around the world.
To see what the industry is saying about our Remote Desktop app, check out the links below.
Learn more about Windows Server Remote Desktop Services here.
For the last 2 years my team has been working with a partner from the Bay Area called Cloud Cruiser – and the results of this partnership are going to benefit a lot of enterprises.
We partnered with Cloud Cruiser for a really important reason: The technology developed by this team enables a fully integrated, hybrid cloud-based, financial management solution – something that is a critical need for any organization building its business in the cloud.
Cloud Cruiser has done some pioneering work in the cloud-based financial management category. Their technology encompasses a suite of solutions that enterprises have previously had to build from scratch (with varying degrees of success), or piece together from a variety of different 3rd party apps. These solutions include multi-tenant cloud billing for hosters, showback/chargeback for enterprise, and advanced decision support analytics like customer analysis, demand forecasting, and P&L management.
The Cloud Cruiser team is so excited to share this technology that they’ve decided to deliver it at a very economical price, i.e. free.
I think that this partnership is a big win for IT pros and IT leaders for several reasons.
First, our work with Cloud Cruiser has created an integrated solution for enterprises and hosters that lets them manage both the operational and financial aspects of their cloud from the Windows Azure portal. As IT services become more commoditized, the ability to fully understand these financial elements, and make business decisions based on cost, becomes absolutely critical. With cost information directly available within the Windows Azure portal, our customers have the financial insight they need to make their business even more profitable in the cloud.
This solution is also a no-cost, low-friction entry point for IT pros looking for financial management tools. Cloud Cruiser Express installs automatically within Windows Azure Pack in less than 10 minutes, and the product provides two complementary portal experiences for customers (one for Windows Azure portal and the other for Cloud Cruiser).
The portal experience is designed for quick access to tenant cost information, as well as showback, chargeback, and multi-tenant cloud billing from within the portal. Additional financial management capabilities are accessible through the Cloud Cruiser portal, including a full suite of standard reports, service pricing capabilities, and proactive cost controls – as seen in the video below.
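To make the showback/chargeback idea concrete, here is a minimal sketch of the kind of calculation these tools automate. The service names, rates, and usage numbers are illustrative assumptions; a real solution like Cloud Cruiser meters actual usage directly from the platform and applies your own pricing model.

```python
# Illustrative showback/chargeback math (hypothetical rates and usage;
# real tooling meters usage from the cloud platform automatically).

RATES = {                      # price per unit, hypothetical
    "vm_core_hours": 0.08,
    "storage_gb_hours": 0.0002,
    "egress_gb": 0.12,
}

def chargeback(usage):
    """Return (line_items, total) for one tenant's metered usage."""
    line_items = {svc: round(qty * RATES[svc], 2) for svc, qty in usage.items()}
    return line_items, round(sum(line_items.values()), 2)

tenant_usage = {"vm_core_hours": 1200, "storage_gb_hours": 50000, "egress_gb": 30}
items, total = chargeback(tenant_usage)
print(items, total)  # compute 96.0 + 10.0 + 3.6 -> 109.6
```

The same per-tenant line items drive showback (reporting cost back to the business unit) and chargeback (actually billing for it); the only difference is whether an invoice follows the report.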
Another great element to consider is the completeness of this solution; Cloud Cruiser Express is comprehensive right out-of-the-box. It offers integrated showback, chargeback, and billing for your Windows Azure environment very quickly, and the solution easily scales as your business grows. It’s a fast and easy way to start seeing ROI from your financial analytics.
Cloud-based financial management is an area that Microsoft takes very seriously, and our work with Cloud Cruiser emphasizes the value we place on this functionality. It should also underscore the confidence your organization can have in hosting these important assets in a secure, cloud-based environment.
To get started with Cloud Cruiser Express, there are two simple steps:
To make this process really simple, the Cloud Cruiser team has created this 3-minute video detailing the installation process:
Also, for more detail about how to maximize this technology, I recommend checking out the Cloud Cruiser Express for Windows Azure Pack webinar hosted by Cloud Cruiser and Microsoft on Oct. 23 at 10:00am PST.
You can read more about how Cloud Cruiser integrates with Windows Azure Pack or System Center here. That link comes complete with white papers and demo videos – more than enough to get you up to “Expert” status in time for the webinar.
As of yesterday afternoon, the Microsoft Remote Desktop App is available in the Android, iOS, and Mac stores (see screen shots below). There was a time, in the very recent past, when many thought something like this would never happen.
If your company has users who work on iPads, Android, and Windows RT devices, you also likely have a strategy (or at least a point of view) for how you will deliver Windows applications to those devices. With the Remote Desktop App and the 2012 R2 platforms made available earlier today, you now have a great solution from Microsoft to deliver Windows applications to your users across all the devices they are using.
As I have written about before, one of the things I am actively encouraging organizations to do is to step back and look at their strategy for delivering applications and protecting data across all of their devices. Today, most enterprises use one set of tools for enabling users on PCs, and then deploy another tool for enabling users on their tablets and smart phones. This kind of overhead and the associated costs are unnecessary – but, even more important (or maybe I should say worse), your end users therefore have different and fragmented experiences as they transition across their various devices. A big part of an IT team’s job must be to radically simplify the experience end users have in accomplishing their work – and users are doing that work across all their devices.
I keep bolding “all” here because I am really trying to make a point: Let’s stop thinking about PCs and devices in a fragmented way. What we are trying to accomplish is pretty straightforward: Enable users to access the apps and data they need to be productive in a way that can ensure the corporate assets are secure. Notice that nowhere in that sentence did I mention devices. We should stop talking about PC Lifecycle management, Mobile Device Management and Mobile Application Management – and instead focus our conversation on how we are enabling users. We need a user-enablement Magic Quadrant!
OK – stepping off my soapbox.
Delivering Windows applications in a server-computing model, through solutions like Remote Desktop Services, is a key requirement in your strategy for application access management. But keep in mind that this is only one of many ways applications can be delivered – and we should consider and account for all of them.
For example, you also have to consider Win32 apps running in a distributed model, modern Windows apps, iOS native apps (side-loaded and deep-linked), Android native apps (side-loaded and deep-linked), SaaS applications, and web applications.
Things have really changed from just 5 years ago when we really only had to worry about Windows apps being delivered to Windows devices.
As you are rethinking your application access strategy, you need solutions that enable you to intelligently manage all these applications types across all the devices your workforce will use.
You should also consider that the Remote Desktop apps released yesterday are proof of Microsoft’s commitment to enabling you to have a single solution to manage all the devices your users will use.
Microsoft describes itself as a “devices and services company.” Let me provide a little more insight into this.
Devices: We will do everything we can to earn your business on Windows devices.
Services: We will light up those Windows devices with the cloud services that we build, and these cloud services will also light-up all (there’s that bold again) your other devices.
The funny thing about cloud services is that they want every device possible to connect to them – we are working to make sure the cloud services that we are building for the enterprise will bring value to all (again!) the devices your users will want to use – whether those are Windows, iOS, or Android.
The RDP clients that we released into the stores yesterday are not v1 apps. Back in June, we acquired IP assets from an organization in Austria (HLW Software Development GmbH) that had been building and delivering RDP clients for a number of years. In fact, there were more than 1 million downloads of their RDP clients from the Apple and Android stores. The team has done an incredible job using those assets as a base for the development of our Remote Desktop App, creating a very simple and compelling experience on iOS, Mac OS X, and Android. You should definitely give them a try!
To start using the Microsoft Remote Desktop App for any of these platforms, simply follow these links:
Android Store
iOS Store
Mac Store
Over the last year, we have made some incredible developments with Windows Server 2012 R2, System Center 2012 R2, and the new release of Windows Intune – and I am incredibly excited for these products to reach general availability today!
Leading up to this, our team has worked incredibly hard to share as much information as possible about this new platform and its vast array of capabilities – and I encourage everyone to dive into this material in the What’s New in 2012 R2 series we produced over the summer.
Here are a few sources of information that I recommend checking out – and you will probably notice that each of these pages has been entirely redesigned and repopulated specifically for R2:
You can check out trial and evaluation info for all of these products here. I’m also excited to announce the availability of a pre-built Windows Server 2012 R2 image in the Windows Azure Image gallery. With this functionality you can now provision virtual machines with this new OS in just minutes.
Each of these pages allows you to drill down to find the specific information you need, and also offers product trials so that you can see the best-in-class performance for yourself. The amount of new features and functionality is considerable, so take the time to use the resources developed by the Windows Server and System Center teams to examine and evaluate how these products can support the work you’re doing and the things you’re responsible for on a day-to-day basis.
Together, these new products truly deliver on the Cloud OS vision by empowering both enterprises and hosting service providers to create datacenters without boundaries using Hyper-V for high-scale virtualization; high-performance storage at dramatically lower costs; built-in, software-defined networking; and hybrid business continuity. I’ll be writing about all of these topics at greater length in the near future.
Both enterprise and service provider IT teams are going to benefit from additional new offerings that are going to make a big, positive impact in their day-to-day operations and their ability to think and act strategically within their organization.
For example, we’ve developed incredible innovations for storage and software-defined networking (SDN).
For storage, the 2012 R2 products deliver a comprehensive array of technologies across Windows Server, Windows Azure, and System Center that allow customers to totally rethink storage and maximize their investments in both primary storage and data protection. This is one of those places where we have taken what we have learned in our public cloud and have delivered that knowledge in the form of some really game-changing solutions.
With Storage Spaces, you get the performance, reliability and availability that in the past only came with expensive SANs, but is now available on low-cost, industry standard disks. Don’t get me wrong, we love SANs and Windows Server has the most comprehensive support for SANs of any Server OS on the planet – but the cloud is driving innovations that deliver incredible value at a fraction of the cost. Storage is one of these places.
To get specific here, Windows Server 2012 R2 Storage Spaces delivers policy-based tiering using SSDs and HDDs to optimize both the cost and performance of virtualized storage. This also delivers incredible IOPS – all on industry-standard, low-cost storage. Plus, if you have not had the chance to look at the de-dup capabilities in VDI, you should make the time! We are seeing a 90% decrease in the storage required to deliver VDI for your end users, along with a dramatic increase in the performance of VDI when running in a de-duped configuration.
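The arithmetic behind that 90% figure is worth spelling out: pooled VDI desktops share most of their blocks, so deduplication stores each unique block once. The sketch below uses made-up volume sizes purely to illustrate how savings and dedup ratio relate; your actual numbers will depend on your images and workload.

```python
# Rough arithmetic behind deduplication savings (illustrative numbers only).
# A "90% decrease" in storage corresponds to a 10:1 dedup ratio.

def dedup_savings(logical_gb, physical_gb):
    """Savings rate and dedup ratio for a deduplicated volume."""
    savings = 1 - physical_gb / logical_gb
    ratio = logical_gb / physical_gb
    return savings, ratio

# e.g. 100 pooled VDI desktops at 40 GB logical each,
# with only 400 GB of unique blocks actually stored:
savings, ratio = dedup_savings(logical_gb=100 * 40, physical_gb=400)
print(f"{savings:.0%} saved, {ratio:.0f}:1 ratio")  # 90% saved, 10:1 ratio
```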
Regarding the innovation we’ve been driving for SDN, this is an area that is particularly exciting, and I’ll be writing about this more in the future. In short, the in-box Software Defined Networking functionality is remarkable. This functionality builds on Hyper-V Network Virtualization in Windows Server 2012, and we are delivering in-box capabilities (Site-to-Site VPN Gateway, for example) that will seamlessly bridge physical and virtual networks for hybrid connectivity, as well as enable flexible workload mobility between enterprises and their service providers, with System Center as the management plane. A very positive sign we’re already seeing in this space is the big increase in ecosystem momentum around the work we’re currently doing.
There are some really impressive new offerings that are driving these kinds of innovations, and based on my discussions with customers and partners, I believe that these (to name just a few) are going to be particularly helpful to IT pros around the world:
Another huge area of innovation in 2012 R2 is the leap forward we’ve made with unified device management and BYOD. Microsoft’s cloud OS strategy is focused on providing solutions that substantively help organizations tackle a really broad range of IT challenges – and few are quite as pressing and daunting as BYOD.
In the 2012 R2 products, there are a number of end-to-end scenarios enabling BYOD, like allowing users to register their devices with Active Directory so they can gain access to corporate resources on their personal devices. Functionality like this is made possible with features like Workplace Join, Web Application Proxy, AD FS v3, and Active Directory.
This is a topic that was written about at great length on this blog during the “What’s New in 2012 R2” series – here and here, specifically.
All of our work on BYOD scenarios is aimed at improving the management solutions that IT teams need to make BYOD a reality for their organization. As BYOD scenarios grow in popularity among businesses, this is an area where Windows 8.1 will really make a positive impact. Windows 8.1 makes managing mobile devices even easier for IT pros with improved IT controls, remote business data removal from company-connected devices, and Workplace Join. Additionally, Windows 8.1 includes Open MDM, which enables third-party MDM solutions with no additional agent required. Also look for the Company Portal (a self-service portal) in the Windows, Apple, and Android stores, which lets users provision their applications to all their devices.
We also announced last week that we have released Remote Desktop Services (RDS) apps into all these stores as well. We are a devices and services company, and this means that we will do everything we can to earn your business on Windows devices – it also means that we are building our services to be used on all devices. We currently have great support for iOS and Android devices with the investments we have made in the R2 products and Intune. We are super committed to supporting all the platforms your users want to work on.
One final great piece of information is the updated Cloud OS page. Regular readers of this blog know that the Cloud OS strategy is among my favorite topics – and one of the trends that I believe is reshaping IT and the tech industry as a whole.
As your evaluations get underway and IT teams around the world experience these offerings, don’t hesitate to reach out and share your reactions in the comments section or on Twitter.
For additional news and information you can also follow @MSCloud.
Thanks again to our partners and customers who have provided invaluable input and feedback on the R2 wave of products. I think you’re really going to enjoy seeing what this platform can do for your business.
And no GA announcement would be complete without a heartfelt congratulations to the extended team responsible for building these products. The R2 wave products have been meticulously and thoughtfully built by a global organization of truly unmatched talent; the final product is something we can all be very proud to call our own. It’s an honor to be a part of such a team, and it is an honor to bring such a genuinely remarkable product to market.
To read more about all the great things GA’ing today, spend some time (it’s Friday, you deserve it!) reading through the GA-centric posts on the Windows Server Blog, Server-Cloud Blog, System Center Blog, and Windows Intune Blog. These posts are incredibly detailed, incredibly helpful, and they are each written by the same engineers who built the products!
Earlier this morning I participated in a panel at GigaOm’s Mobilize conference. Sitting on the panel with me were Alan Dabbiere the Chairman of AirWatch, and JP Finnell the Head of Mobile Strategy and Innovation for SAP.
You can watch the panel on demand here, and I shot a short recap of my morning at Mobilize that’s included at the bottom of this post.
There was a lot of great discussion before, during, and after the panel. Here’s an overview of my answers to some of the questions raised during the panel and by press throughout the day.
Panel moderator Cormac Foster (Research Director for Mobility at GigaOm) said that one of the big risks to an enterprise is malicious behavior within the workforce, and he asked how technology can mitigate this.
As scary as it sounds, this perspective is pretty accurate. I commented that the customer organizations with whom I meet – some of the most forward looking in their user-enablement strategies – follow the principle that their users are resourceful. These organizations understand that they need to have a strategy that delivers solutions to their users that are simple and non-intrusive – and if they do not deliver simple solutions (or no solution at all), these resourceful users will simply go around IT and use the consumer services they use in their personal lives. These users want to be efficient and effective, and they will follow the path of least resistance to get their jobs done. This is how threats can be unknowingly (and knowingly) introduced within an enterprise.
I think that organizations can do a lot to provide their users with simple, safe, manageable solutions. One example I shared was a personal one: I’m not the only “Brad Anderson” working in the technology industry (and here I previously thought I was unique!). There was a Brad Anderson at Dell (VP of the Dell Server Division) and another at Best Buy (former CEO). There have been times over the years when mail meant for one of them was sent to me instead. These mails were unintended, of course, but it demonstrates how easily your company’s data can be compromised by accident – to say nothing of what can happen when there is malicious intent at work.
The kind of solution we need to keep corporate data secure in situations like this is a Rights Management solution (like what Microsoft delivers), where the security travels with the data. If accessing high-business-impact data required authentication against Active Directory, accidental data leakage by users would not compromise that data.
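To illustrate the "security travels with the data" principle, here is a toy sketch: the payload carries its own access policy, and opening it requires an identity check no matter where the document has wandered. This is a conceptual model only – it is not how Rights Management is actually implemented (real systems encrypt the payload and verify identity against a directory service), and all names here are hypothetical.

```python
# Toy model of rights-managed data: the policy is attached to the payload,
# so a misdirected copy is useless to an unauthorized recipient.
# Conceptual sketch only; real RMS encrypts the content and checks AD.

from dataclasses import dataclass

@dataclass(frozen=True)
class ProtectedDocument:
    policy: frozenset   # identities allowed to open the document
    _payload: str       # in a real system this would be encrypted

    def open(self, authenticated_user):
        if authenticated_user not in self.policy:
            raise PermissionError(f"{authenticated_user} is not authorized")
        return self._payload

doc = ProtectedDocument(policy=frozenset({"brada@contoso.com"}),
                        _payload="Q3 acquisition plan")

print(doc.open("brada@contoso.com"))       # authorized: payload returned
# doc.open("brad@othercorp.com")           # would raise PermissionError
```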
The next comment from the panel touched on how much technology can really prevent bad user behaviors. Most organizations trust their end users, but they still make the effort to trust but verify. As I describe above, simple solutions that naturally integrate with the way in which users do their work can prevent accidental and innocent mistakes that cause data leaks and negatively impact a business.
If a user has malicious intent, it is a whole different ball game and solutions are needed that not only grant access to apps and data, but also track who accessed what, when and from where in order to discover and identify these kinds of behaviors. This is something that we are working on all across Microsoft. We are delivering more and more integrated solutions across Windows, Windows Phone, System Center, Intune, Office, and Active Directory. We are very proud of the fact that our investment in these end-to-end technologies give customers the comprehensive solutions they need.
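The who/what/when/where tracking described above can be sketched as a simple audit log with an anomaly check layered on top. The field names and the threshold rule here are illustrative assumptions; production systems feed these events into proper analytics rather than an in-memory list.

```python
# Sketch of an access audit trail: record who touched what, when, and from
# where, then surface users whose access volume looks anomalous.
# Field names and the flagging rule are illustrative, not a real product API.

import datetime
from collections import Counter

audit_log = []

def record_access(user, resource, source_ip):
    audit_log.append({
        "user": user,
        "resource": resource,
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "where": source_ip,
    })

def flag_heavy_readers(threshold):
    """Flag users whose access count exceeds a simple threshold."""
    counts = Counter(entry["user"] for entry in audit_log)
    return [user for user, n in counts.items() if n > threshold]

for _ in range(25):
    record_access("suspect@contoso.com", "hr-database", "10.0.0.7")
record_access("normal@contoso.com", "hr-database", "10.0.0.8")

print(flag_heavy_readers(threshold=20))  # only the heavy reader is flagged
```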
We also discussed the growth in application management strategies and app management products. One of the trends I am seeing is that much of the conversation in the industry around devices is moving from managing settings to managing the applications and data that users want to access. This is where I think we are going to see industry vendors begin to really look at their strategies for managing devices in a much more holistic fashion and begin to approach the challenges in the context of, “What is our unified strategy for managing access to apps and data across all of the devices my users are using – PCs, tablets, phones, etc.”
There is lots of discussion about a new category of management called MAM – Mobile Application Management. My belief is that organizations need to think about their application management strategy in end-to-end terms, rather than the fragmented approach that seems to be so common, e.g. one strategy for PCs and a different strategy for mobile devices. Your app management strategy should be all inclusive. Yes, there will likely be different access policies across these devices, but you should have a single strategy that is comprehensive and covers all your devices.
Hopefully this gives you a sense of the questions and discussion at the event. It’s been a great day, and I look forward to digging into these and other related topics more here on ITC.
To wrap up our nine-part series on the new 2012 R2 wave of products, I sat down (well, stood up) with Rick Claus from The Edge Show to talk about the third pillar of the R2 release: Enable Modern Business Apps.
As always, it was a lot of fun to sit down and talk about the solution-focused nature of 2012 R2 and how our customers can use it to hit the ground running to solve a lot of the common problems facing their organization.
If you have questions about the 2012 R2 series or what’s next for the Windows Server & System Center products, don’t hesitate to get in touch!
Over the past 11 years, the Microsoft Management Summit (MMS) has grown from a small user group event focused on systems management and managing PCs, to a large and passionate gathering of the world’s best and brightest IT Pros.
Now it’s time to look ahead to the next step for our industry and this community.
Starting this year we are merging MMS with TechEd.
MMS was once a small sub-set of content covered at TechEd, but the role of management in key trends like the Consumerization of IT and the move to the cloud have dramatically expanded what IT Professionals need to know. These expanded (and ever-expanding) needs have broadened the requisite subject matter covered at each MMS and, in fact, it has become so broad that the tracks at MMS are now very similar to what’s covered at TechEd. Our belief is that by combining the events we can substantively improve and expand the attendee experience.
Considering how important MMS has become over the years, it is probably obvious that this decision was not made lightly. The permanent merging of these events is not based on a valuation of one event over the other, and I can confirm that it is not a cost-cutting measure. Rather, this change has been made solely with the best interests of our community in mind.
The objective is simple: Provide attendees a better opportunity for knowledge gathering and technical growth. By drawing together a larger community to this combined event, every attendee will have a dramatically wider range of experts and luminaries (both from the industry and within Microsoft) to learn from and engage.
TechEd 2014 will be your single best source for the latest news, trends, resources and deep technical education. Also, the System Center team is already preparing to teach the deep 300- and 400-level content you’ve come to expect at MMS. There will also be expanded Early Bird pricing options, dedicated Management Meet & Geek opportunities, dedicated Management Instructor-Led Labs and Hands-on Labs, structured and unstructured networking opportunities tuned to the Management community and broader communities, and other unique MMS experiences.
This is a big transition, and I understand and respect the concerns that accompany this type of strategic move. The content and operational experts behind both events are already working together to ensure that this new event surpasses what you’ve come to expect from TechEd while delivering everything you require from MMS.
It’s true that TechEd is a massive event, and its scale will accommodate the MMS community in addition to adding great content of its own. I can promise you that MMS content, experts, and community will be alive and well in this new format within TechEd (see Venn diagram below).
The next step is to mark your calendars for May 12-15, 2014 for TechEd 2014 in Houston, Texas. Registration opens on November 5, 2013, and if you register by December 31, 2013, you’ll get the aforementioned early bird pricing.
If you haven’t already, take a minute to sign up for the TechEd Insiders newsletter to get the latest TechEd information.
I’m looking forward to seeing everyone in Houston.
Earlier this week I got to deliver a keynote at Oracle OpenWorld 2013. It was a lot of fun, and pretty exciting to be the first Microsoft exec to talk to this massive and important part of the IT industry.
In case you missed it, you can check out the full keynote here, or the highlights below.
Thanks again to Larry and the Oracle team for the invite!
For years Microsoft and Oracle have helped customers address enterprise technology needs, and during those years I’ve had the opportunity to personally partner with, compete with, and admire Oracle as an industry leader.
Certainly when you think of enterprise scalability and reliability Oracle comes to mind, and I believe the Microsoft clouds also deliver enterprise-grade reliability, scalability, and performance – at the “cloud economics” price point that so many have come to expect from Microsoft. The combination of the Oracle workloads on Microsoft clouds is incredibly powerful.
I am proud to be a part of the next big step in this partnership – delivering the power of Oracle’s mission-critical software to the millions of customers running Windows and Azure. In June, we announced a new strategic partnership to bring Oracle software to even more Windows customers and partners, and I am excited to share two big announcements:
Building on our longstanding relationship, customers using Java, Oracle WebLogic Server, and Oracle Database will now have the flexibility to run that software on Windows Azure and Windows Hyper-V. Also, ISVs who build and sell applications based on Java and Oracle will have the flexibility and choice to run their applications and services in Windows Azure – enabling the enterprise adoption of cloud-based ISV solutions built with Oracle technologies.
Not only am I excited to share the first big results of this partnership, but, when I take the stage later today, it will be the first time a Microsoft executive has delivered a keynote at Oracle OpenWorld. You can watch it live or on-demand here.
During my remarks, I’m going to touch on two big topics afoot in the technology industry: The cloud OS vision, and how Microsoft and Oracle are working together to bring the power of Oracle’s software to private/public clouds and service providers. I’ll also show how one of our mutual partners, Redknee, is already building innovative new, Oracle-based solutions on Windows Azure.
If you use Oracle software & Windows software, or if you’re excited about how the cloud can transform your business (hopefully both), I encourage you to watch.
Today is the first of many steps that will deliver on the promise of this partnership, and there is more work and more announcements to come. In the meantime, try out the preview, check out the Oracle Self-service Kit, and learn more about running Oracle’s software on Microsoft’s clouds. We’re just getting started.
A few days ago I got to sit down with the guys from TechNet Radio to talk about the third pillar of our “What’s New in 2012 R2” series – Enable Modern Business Apps.
We conclude this three-episode series by discussing why it’s important for IT pros to understand how modern apps function (hint: it’s really important), the nuts and bolts of Microsoft’s PaaS, and great stuff about the Windows Azure Pack.
You can check out the first episode here and the second episode here.
Earlier today, Microsoft announced that we are making the current RTM bits for Windows 8.1 and Windows Server 2012 R2 available to TechNet and MSDN subscribers effective immediately.
Visual Studio 2013 Release Candidate is also now available.
We heard your feedback that you needed the RTM bits now, and we listened: the RTM bits are available.
We will continue to validate the Windows Server 2012 R2 software with our partners, and there will be some additional updates to this build prior to GA in October.
I can’t wait to hear your feedback once you get started with the software!
I have a quality weekend reading recommendation for all the IT pros looking for the latest and greatest information on Automation.
One of my favorite blogs is Building Clouds, and the automation SMEs who contribute to that site just wrapped up “Automation Month.”
Check out this post to see links to all the topics they’ve explored this month, as well as their TechNet Gallery contributions. The team published a grand total of 16 Automation-related posts, and that includes the “Next Steps” content they wrote for the “What’s New in 2012 R2” series*.
These posts are a great way to build your Automation expertise, and they offer a very interesting look at the new features and functionality in R2 Automation.
I love seeing these kinds of real world, applicable examples of this technology at work, and I encourage you to keep an eye on the Building Clouds Blog for more great solutions as we get closer to GA.
* The Automation team’s content appeared in Week 5 and is titled, “Introduction to Service Management Automation.”
Part 9 of a 9-part series.
A major promise underlying all of the 2012 R2 products is really simple: Consistency.
Consistency in the user experiences, consistency for IT professionals, consistency for developers and consistency across clouds. A major part of delivering this consistency is the Windows Azure Pack (WAP). Last week we discussed how Service Bus enables connections across clouds, and in this post we’ll examine more of the PaaS capabilities built and tested in Azure data centers and now offered for Windows Server. With the WAP, Windows Server 2012 R2, and System Center IT pros can make their data center even more scalable, flexible, and secure.
Throughout the development of this R2 wave, we looked closely at what organizations needed and wanted from the cloud. A major piece of feedback was the desire to build an app once and then have that app live in any data center or cloud. For the first time this kind of functionality is now available. Whether your app is in a private, public, or hosted cloud, the developers and IT professionals in your organization will have consistency across clouds.
One of the elements that I’m sure will be especially popular is the flexibility and portability of this PaaS. I’ve had countless customers comment that they love the idea of PaaS, but don’t want to be locked-in or restricted to only running it in specific data centers. Now, our customers and partners can build a PaaS app and run it anywhere. This is huge! Over the last two years the market has really begun to grasp what PaaS has to offer, and now the benefits (auto-scale, agility, flexibility, etc.) are easily accessible and consistent across the private, hosted and public clouds Microsoft delivers.
This post will spend a lot of time talking about Web Sites for Windows Azure and how this high density web site hosting delivers a level of power, functionality, and consistency that is genuinely next-gen.
Microsoft is literally the only company offering these kinds of capabilities across clouds – and I am proud to say that we are the only ones with a sustained track record of enterprise-grade execution.
With the features added by the WAP, organizations can now take advantage of PaaS without being locked into a cloud. This is, at its core, the embodiment of Microsoft’s commitment to make consistency across clouds a workable, viable reality.
This is genuinely PaaS for the modern web.
Today’s post was written by Bradley Bartz, a Principal Program Manager from Windows Azure. For more information about the technology discussed here, or to see demos of these features in action, check out the “Next Steps” at the bottom of this post.
* * *
Over the past decade, we’ve seen a dramatic shift in how developers build applications. Modern applications frequently reside on the web, and this shift is driving massive demand for scalable, secure, and flexible ways to host web applications across public and private clouds.
Web Sites for Windows Server, included in the Windows Azure Pack, provides a Platform as a Service for modern web applications. It is both a powerful self-service platform for developers and a flexible hosting solution for IT professionals. Web Sites does this in a manner which is highly consistent with the Windows Azure Web Sites service. This allows developers and IT professionals to code against and manage a common set of runtimes, experiences, and capabilities – regardless of deployment to the public or private cloud.
Developers and IT pros alike have often struggled with the complexity of web farm configuration and management. By providing a turnkey solution, Web Sites provides developers with the web application services they expect while simplifying administration for the IT professional.
Today’s web developer could often be referred to as a “polyglot programmer.” He or she develops in many languages, often selecting the language, database, and tools best suited to solving a given problem. Additionally, many developers don’t start with new applications; instead, they customize an existing application to meet their needs. In either case, Web Sites aims to provide the developer with choice, reducing time to market, and increasing efficiency.
In addition to providing a best-in-class environment for creating new applications from scratch, the Web Sites service includes a gallery of applications and templates to accelerate application time to market. Popular open source web applications, including DotNetNuke, Umbraco, WordPress, Drupal, and Joomla are packaged and ready for zero-code deployment to the Web Sites cloud. Furthermore, there are a number of templates for creating new .NET, PHP, Node.js, and Python apps.
A developer can create a web site from a template in a few clicks. We will walk through an example of a developer who creates his/her blog using the WordPress application template. To access the gallery, the developer will click ‘New’ in the Consumer Portal and choose to create a new web site from the gallery.
The gallery displays a list of applications that the service provider enables for tenant use. The developer can select WordPress from this list and provide the configuration settings to deploy the application to his/her new web site.
In a few seconds, a web site is instantiated on a server that also hosts other tenants’ web sites, thanks to the shared hosting capability of the Web Sites PaaS. The developer can now monitor, configure, and scale this newly created web site from the Consumer Portal.
The URL for the newly created WordPress blog is now available on the Dashboard tab. The developer can share the URL of this web site as needed or associate it with a custom domain name.
Out of the box, the Web Sites service provides broad language support, including ASP.NET, Classic ASP, Node.js, PHP, and Python. Furthermore, if a developer prefers a language not included, he or she can provide a generic FastCGI handler for running applications in his/her web site. By providing broad language support, battle-tested in Windows Azure Web Sites, the private cloud can now offer a broad menu of language options to satisfy the demands of developers.
Beyond languages and frameworks alone, the Windows Azure Pack also provides SQL Server and MySQL database provisioning as an integral part of the Web Sites provisioning and management experience. Since different languages often hold a database preference, providing database choice through a consistent user interface allows developers to focus on building applications naturally. By delivering these databases as a service, developers can focus on coding rather than database administration.
Additionally, we see that developers often prefer a specific set of tools for development and deployment. For .NET developers, we provide best-in-class support for Visual Studio and WebMatrix. Specifically, Visual Studio users can easily import a Publish Settings file which allows one-click application deployment. WebMatrix users, in addition to one-click deployment, can also edit their site live; it is launched by a button in the Service Management Portal. Deep tool integration makes Web Sites a great place to efficiently host existing ASP.NET and Classic ASP web sites.
Both Visual Studio and WebMatrix utilize the Web Deploy publishing endpoint. For more traditional file upload tools, both FTP and FTP-S are supported.
To demonstrate Visual Studio integration, we’ll begin by creating a web site in the Windows Azure Pack and in Visual Studio. You’ll notice that we’ll be creating a web site the same way you would build an application for deployment to IIS or to Windows Azure Web Sites.
After creating a default page, we will download the publishing profile and configure publishing in Visual Studio.
Next, we will import the downloaded publishing profile.
Finally, we will click “Publish” to deploy our application.
In a few moments, the application build will complete and publishing will commence. After Web Deploy syncs changes between the local and remote deployment, the application will be live.
As DevOps becomes an increasingly common phenomenon, we see developers demanding a greater degree of integration between their source control systems and hosting operations. The Web Sites runtime can host a copy of a Git source code repository, allowing rapid iterative development in the private cloud (including rollback to previous versions of the application). By using standard Git commands, a user can push changes from a local repository into the cloud with no special integration. Consequently, cloning applications across clouds becomes a simple task.
To illustrate this, watch how easy it is to create an application and set up deployment from source control in Windows Azure. In this case, we’ve deployed a basic “Hello World” application written in PHP using Git.
Next, create a web site in the Windows Azure Pack, and set up deployment from source control. From here, the Git tools can be used to clone the site from the public cloud into the private cloud. Once cloned, the application can be redeployed to the private cloud with no code changes. Note the consistency across the Azure UI and how the application behaves identically despite deployment to a private cloud.
Delivering this level of PaaS functionality is the result of a large number of new and improved features: cross-platform development, zero lock-in, scalability, site modes, horizontal/vertical scaling, speed, and multi-datacenter support, to name a few. We’ll use the rest of this post to examine these features and their applications in detail.
By supporting FTP, FTPS, Web Deploy, and Git protocols alongside the Service Management Portal, the Web Sites service allows developers to deploy and manage applications in the private cloud from any client OS, including Mac OS X, Linux, and Windows.
Because the Web Sites runtime can be hosted by Microsoft, enterprises, or hosting service providers, developers can confidently deploy their applications. Should a developer need to migrate his or her application to a different cloud, he or she can do so quickly and simply without code changes. Customers looking to outsource hosting of web applications can leverage Windows Azure or hosted offerings from other third-party hosting service providers.
Web Sites provides a high degree of scalability. By de-affinitizing web applications from a single server, apps can dynamically execute on any server in a given cluster at any point in time. This allows the Web Sites service to rapidly respond to changing operating conditions. In the event of server failure, requests are load balanced and rerouted to a healthy machine in the farm to maintain application availability. Should an application require additional resources, developers or IT pros can quickly and easily allocate additional resources to the web app to preserve SLAs.
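The failover behavior described above can be sketched in a few lines. This is a conceptual illustration only (the class and method names here are invented, not part of the Web Sites runtime): because apps are de-affinitized from any single server, a request can be routed to any healthy worker, and a server failure simply shrinks the routing pool.

```python
import random

class Farm:
    """Toy model of a de-affinitized web farm behind a load balancer."""

    def __init__(self, servers):
        self.healthy = set(servers)

    def mark_failed(self, server):
        # The load balancer stops routing to a failed machine.
        self.healthy.discard(server)

    def route(self, request):
        if not self.healthy:
            raise RuntimeError("no healthy workers available")
        # No session affinity: any healthy worker can serve the app.
        return random.choice(sorted(self.healthy))

farm = Farm(["worker1", "worker2", "worker3"])
farm.mark_failed("worker2")      # simulate a server failure
server = farm.route("GET /")     # request still lands on a healthy worker
```

Scaling out under this model is just adding servers to the pool, which is why monitoring-driven capacity changes can take effect immediately.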
The Web Sites service provides two site modes – Shared Mode and Reserved Mode. Shared Mode runs sites from multiple tenants on a single server to maximize density and minimize operational costs. Shared Mode works quite well for development or testing scenarios. Reserved Mode runs sites for a single tenant in a pool of servers to maximize performance and improve isolation. Often, Reserved Mode is used for production applications. Since switching between execution modes is delegated to the developer, he/she can quickly choose the execution mode best suited to his/her application.
Application scaling strategies often follow two patterns – horizontal and vertical. The Web Sites runtime supports both; developers can run multiple instances of their applications in both Shared and Reserved Modes. In addition, inside of the Reserved site mode, developers can opt to scale vertically by choosing between three instance sizes (Small, Medium, and Large). By delivering multiple scaling options, developers and IT professionals alike can select the optimal way to host their applications.
Because the Web Sites runtime focuses on delivering a finished service (web application hosting), application provisioning and management functions are orders of magnitude faster than infrastructure-based services. Since applications exist as configuration metadata and content, creation and publishing activities complete in seconds rather than minutes – all leading to increased productivity and decreased time to market.
As end users become increasingly connected via devices, performance expectations increase. Multinational corporations expect a worldwide web application presence. With multi-cloud support, service administrators can deploy multiple Web Site clouds to different geographies. Since these clouds are consumed via the same Service Management Portal, developers can easily deploy applications around the globe with minimal time and effort.
As web applications are often internet facing, security is a critical design point. In this release, the Web Sites service has been enhanced to deliver a secure application hosting experience through robust SSL support. In addition, the feed-based update mechanism allows service administrators to keep the Web Sites cloud current with the latest updates.
Because the secure transport of information to and from Web Sites is critical, the service provides two varieties of SSL support:
The Web Sites cloud incorporates a large number of first and third party dependencies to deliver turnkey operation. However, initial deployment is only a small portion of the overall service lifecycle. By integrating Microsoft Update with our feed-based provisioning process, Microsoft is able to deploy updates to both Microsoft and non-Microsoft software. By keeping the cloud up to date, Microsoft helps you maintain a secure and highly compatible application hosting environment.
With the 2012 R2 release of Windows Server, System Center, and the Windows Azure Pack, we ensured that great new scenarios light up across our server operating system, management tools, and cloud runtimes. As a result, we have several new features which will provide developers exciting ways to build innovative applications:
Developers always look for ways to improve application performance, especially when optimizations don’t require code changes. For .NET developers, the most frequent performance complaint is the slow first request to an idle application. We refer to this as the “cold start” problem. With the R2 release, instead of shutting down idle web sites, we page them out, moving their inactive state from memory to disk. This dramatically improves performance by reducing the frequency of cold start events, since the application can quickly be paged back into memory from disk instead of requiring recompilation. We have also optimized the Web Site cloud to improve performance when an application cold start is unavoidable.
Running cloud scale services is challenging, and we’ve taken the lessons learned from running Windows Azure and incorporated them into the Windows Azure Pack. First, we’ve built a wholly distributed architecture to improve security and maximize scalability. Next, we’ve simplified visibility into farm operations and server provisioning. Finally, we’ve made it easy to build plans which govern resource consumption within the Web Site cloud.
As you can see in the architecture diagram above, the Web Sites service uses a number of roles to deliver services. Each role serves a specific purpose and these concerns are separated to ensure a high degree of security. The Web Sites roles include:
When building the Web Sites runtime and user interface, we wanted to deliver the same “single pane of glass” for service administrators. We studied the routine activities of cloud admins and realized that there were three primary groups of activities: Server provisioning, cloud health/capacity management, and cloud troubleshooting. To expedite these processes, we created simple ways to complete these tasks within the browser.
To start, we created a unified view of all Web Sites roles which allows service administrators to view three key elements:
By consolidating this information into a single view, IT professionals can quickly determine if sufficient capacity is available or if service health is degraded.
Should the cloud require additional capacity, the administrator can quickly add role instances. Web Sites fully automates deployment and configuration of the runtime onto new machines to seamlessly remediate capacity constraints. To illustrate this functionality, we’ll demonstrate how to provision an additional worker, delivering more capacity to developers on the Small Reserved Instance tier of service.
First, click the “Add Role” button, and then select the appropriate role type (Web Worker).
Next, provide the name of the machine to be added to the cloud. Finally, specify the worker type.
With a machine name and role type, we can now complete the provisioning and configuration process without further intervention.
Service administrators also need to define a service catalog to govern resource consumption and employ chargeback/billing for cloud usage. To facilitate this, the Web Sites service provides a robust experience for defining base packages of services (Plans) and incremental capabilities (Add-ons). With this flexibility, administrators can author a comprehensive catalog to meet the differing SLAs of different developers.
From the service administrator dashboard, we will start by selecting New, then Plan.
Next, the service administrator must provide a name for a plan.
Then, he/she must select which services are included in the plan and which cloud backs each of those services.
Now, the administrator can customize the quotas for each service included in the plans. We will edit the quotas for the Web Sites service.
Once this is complete, the administrator can define resource consumption limits for the plan. Typical resource types, such as CPU time, Memory, Bandwidth, and Storage are all present. In addition, specific features can also be enabled or disabled; this includes custom domain, SSL, WebSocket, 64-bit support, etc.
By customizing plans and add-ons, Web Sites gives the service administrator strict control over how consumers can use the service. This provides fine-grained control over the services offered through a service catalog, and it also governs the usage reporting consumed by chargeback and billing solutions.
The Web Site service in the Windows Azure Pack has been built from the ground up to provide developers with a flexible, scalable, and secure platform for hosting modern applications.
As Web Sites is built from the same source code as the Windows Azure service, the Windows Azure Pack provides highly consistent experiences with the Microsoft-operated Windows Azure Web Sites service. For administrators, we have taken knowledge gained from operating our cloud to improve operational efficiency.
Give it a try. The bits and documentation can be found here.
- Brad
As noted in my earlier post about the availability dates for the 2012 R2 wave, we are counting the days until our partners and customers can start using these products. Today I am proud to announce a big milestone: Windows Server 2012 R2 has been released to manufacturing!
This means that we are handing the software over to our hardware partners for them to complete their final system validations; this is the final step before putting the next generation of Windows Server in your hands.
While every release milestone provides ample reason to celebrate (and trust me, there’s going to be a party here in Redmond), we are all particularly excited this time around because we’ve delivered so much in such a short amount of time. The amazing new features in this release cover virtualization, storage, networking, management, access, information protection, and much more.
By any measure, this is a lot more than just one year’s worth of innovation since the release of Windows Server 2012!
As many readers have noticed, this release is being handled a bit differently than in years past. With previous releases, shortly after the RTM Microsoft provided access to software through our MSDN and TechNet subscriptions. Because this release was built and delivered at a much faster pace than past products, and because we want to ensure that you get the very highest quality product, we made the decision to complete the final validation phases prior to distributing the release. It is enormously important to all of us here that you have the best possible experience using R2 to build your private and hybrid cloud infrastructure.
We are all incredibly proud of this release and, on behalf of the Windows Server engineering team, we are honored to share it with you. The opportunity to deliver such a wide range of powerful, interoperable R2 products is a great example of the Common Engineering Criteria that I’ve written about before.
Also of note: The next update to Windows Intune will be available at the time of GA, and we are also on track to deliver System Center 2012 R2.
Thank you to everyone who provided feedback during the preview process – we could not have done it without you!
I can’t wait to share even more on October 18! In the meantime, keep an eye on this blog and Twitter for updates.
Part 8 of a 9-part series.
Don’t let the title fool you – this post is critically important for Developers and IT pros.
The reason I call out this warning up front is that often, when I’m speaking at conferences around the world, as soon as I start to discuss the developer perspective and developer tools, many IT pros in the room start playing Angry Birds while they wait for the developer section to be over.
Why is it so important for IT Pros to understand how modern applications are built? The answer is simple: IT Pros are the ones who build and operate the infrastructure that hosts these applications, and, the more you know about how these applications are built, the better you will understand their platform requirements.
That’s the tactical reason. There is also a strategic reason.
If your organization is not already in the process of defining its cloud strategy – it soon will be. You need to be a contributor and leader in these conversations. By mastering today’s topics, you can become a part of the conversation and define the long-term solution, rather than simply reacting to decisions you were not a part of making.
Knowing how applications are built for the cloud, as well as the cloud infrastructures where these apps operate, is something every IT pro needs in order to be a voice in the meetings that will define an organization’s cloud strategy. IT pros are also going to need to know how their team fits in this cloud-centric model, as well as how to proactively drive these discussions.
These R2 posts will get you what you need, and this “Enable Modern Business Apps” pillar will be particularly helpful.
Throughout the posts in this series we have spoken about the importance of consistency across private, hosted and public clouds, and we’ve examined how Microsoft is unique in its vision and execution of delivering consistent clouds. The Windows Azure Pack is a wonderful example of Microsoft innovating in the public cloud and then bringing the benefits of that innovation to your datacenter.
The Windows Azure Pack is – literally speaking – a set of capabilities that we have battle-hardened and proven in our public cloud. These capabilities are now made available for you to enhance your cloud and ensure that “consistency across clouds” that we believe is so important.
A major benefit of the Windows Azure Pack is the ability to build an application once and then deploy and operate it in any Microsoft Cloud – private, hosted or public.
This kind of flexibility means that you can build an application, initially deploy it in your private cloud, and then, if you want to move that app to a Service Provider or Azure in the future, you can do it without having to modify the application. Making tasks like this simple is a major part of our promise around cloud consistency, and it is something only Microsoft (not VMware, not AWS) can deliver.
This ability to migrate an app between these environments means that your apps and your data are never locked in to a single cloud. This allows you to easily adjust as your organization’s needs, regulatory requirements, or any operational conditions change.
A big part of this consistency and connection is the Windows Azure Service Bus which will be a major focus of today’s post.
The Windows Azure Service Bus has been a big part of Windows Azure since 2010. I don’t want to overstate this, but Service Bus has been battle-hardened in Azure for more than 3 years, and now we are delivering it to you to run in your datacenters. To give you a quick idea of how critical Service Bus is for Microsoft, consider this: Service Bus is used in all the billing for Windows Azure, and it is responsible for gathering and posting all the scoring and achievement data to the Halo 4 leaderboards (now that is really, really important – just ask my sons!). It goes without saying that the people in charge of Azure billing and the hardcore gamers are not going to tolerate any latency or downtime getting to their data.
With today’s topic, take the time to really appreciate the app development and app platform functionality in this R2 wave. I think you’ll be really excited about how you can plug into this process and lead your organization.
This post, written by Bradley Bartz (Principal Program Manager from Windows Azure) and Ziv Rafalovich (Senior Program Manager in Windows Azure), will get deep into these new features and the amazing scenarios that the Windows Azure Pack and Windows Azure Service Bus enable. As always in this 2012 R2 series, check out the “Next Steps” at the bottom of this for links to additional information about the topics covered in this post.
When building modern applications, developers want a way to connect multiple tiers of their application as well as a way to consume services offered by third party vendors. Different tiers within modern applications may have different hosting methods, e.g. front-tier components are often written as web applications, while back-end services may be hosted in a virtual machine.
The Windows Server 2012 R2 platform offers multiple hosting alternatives, but there is still a need to enable connectivity between an application’s components and services. One of the common approaches is to exchange messages using a broker.
Two years ago we introduced the Windows Azure Service Bus, which provides messaging capabilities in the public cloud. Since its launch we’ve seen multiple scenarios where messaging has been valuable beyond the basic back-end/front-end pattern. We’ve seen messaging used to connect clients and devices to a cloud service, as well as scenarios where messaging was used for integration.
Just a year ago we released Service Bus for Windows Server v1.0, which enables a developer to build, test, and run loosely-coupled, message-driven applications in self-managed environments as well as on developer computers.
Now, with the release of Windows Server 2012 R2 (just one year after we released Service Bus 1.0 for Windows Server and introduced the runtime capabilities of brokered messaging on-premises!), we are proud to release the second version of our broker called Service Bus 1.1 for Windows Server.
In this release we have invested in an integrated experience as a part of the Windows Azure Pack, with the goal of bringing a self-service tenant experience that is consistent with the one that currently exists in Windows Azure.
We’ve listened closely to our customers and focused on improving the following 3 core scenarios with the Service Bus 1.1 for Windows Server and the Windows Azure Pack:
To enable a wide variety of messaging scenarios, Service Bus provides message queues and “Publish/Subscribe” topics.
A queue is a message store in which messages are ordered by send date. One or multiple senders can send messages into a queue, and one or multiple receivers can read and remove messages from the queue. Once a receiver has received a message, that message cannot be received by another receiver.
Typically, queues are used for:
A topic can have multiple subscriptions and each subscription behaves like a queue. One or multiple senders can send messages into a topic. From there, each message is copied into each of the subscriptions. If receivers receive from different subscriptions, they each get a copy of the message. The user can define filters which determine which message is copied into which subscription.
Typically, topics are used for:
In a tightly coupled system, if any one of the components fails, the whole system fails and creates bad user experiences. Communication between components is usually based on synchronous or asynchronous calls to perform a task.
For example, consider a retailer application performing calls from a store front-end (web application) to back-end services like shipping and tracking. See Figure 1 below.
Figure 1: Tightly coupled application.
The front-end and back-end applications can be loosely coupled by introducing a message queue. The store front-end sends shipping requests to the queue, and, once a request is queued, the store front-end sends an acknowledgement to the user that confirms the order. The shipping service fetches orders from the queue at its own pace. If a spike of orders arrives and the shipping service falls behind, or if the shipping service is temporarily unavailable, the store frontend is still able to process new orders. At the same time the shipping service can still process orders already in the queue in case the store frontend application is down.
Scaling a loosely coupled application (see Figures 2 and 3 below) is simple. The application may consist of multiple instances of the store frontend or the shipping service. The queue enables multiple senders and receivers to send messages into (or receive messages from) the queue. At the same time the queue guarantees that each message will be processed only once. Monitoring the queue length enables you to determine whether you need to scale your application.
Figure 2: Loosely coupled application.
Figure 3: Scaling a loosely coupled application.
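The decoupling pattern in Figures 1 through 3 can be sketched with an in-memory queue. This is a conceptual stand-in only, not the Service Bus API (real code would use the Service Bus client libraries): the front-end acknowledges the user as soon as the order is queued, and the shipping service drains the queue at its own pace.

```python
from collections import deque

class MessageQueue:
    """Toy in-memory queue: each message is delivered to exactly one receiver."""

    def __init__(self):
        self._messages = deque()

    def send(self, message):
        self._messages.append(message)

    def receive(self):
        # Once received, a message is removed and cannot be received again.
        return self._messages.popleft() if self._messages else None

    def __len__(self):
        # Queue length is the signal used to decide when to scale out.
        return len(self._messages)

orders = MessageQueue()

# The store front-end queues orders and immediately acknowledges the user,
# even if the shipping service is down or falling behind.
for n in range(3):
    orders.send({"order_id": n})

# Later, the shipping service (one of possibly many instances) drains the
# queue at its own pace; each order is processed exactly once.
processed = []
while len(orders) > 0:
    processed.append(orders.receive())
```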
In some systems the same order must be processed by multiple, independent services. Besides shipping, we may need each order to be processed by a CRM system. Some of the orders are to be processed by the fraud detection system as well. By replacing the queue with a topic and multiple subscriptions our shipping service will continue to receive a copy of each order while the CRM and fraud detection systems receive additional copies of these orders. Since the fraud detection system wants to inspect only a subset of the orders, a filter is defined on the fraud detection’s subscription (see Figure 4 below). The filtering can be done on any property of the order, such as the purchase price.
Figure 4: Publish-subscribe.
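The subscription layout in Figure 4 can be sketched with the NamespaceManager and a SqlFilter. The names here are assumptions made for illustration (a topic named "orders", a PurchasePrice message property, a fraud threshold of 1000, and the WindowsAzure.ServiceBus NuGet package); they are not taken from the original deployment.

```csharp
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

// Hypothetical connection string; see the configuration section later in this post.
string connectionString = "Endpoint=sb://[your namespace].servicebus.windows.net;...";
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

// Shipping and CRM each receive a copy of every order.
namespaceManager.CreateSubscription("orders", "shipping");
namespaceManager.CreateSubscription("orders", "crm");

// Fraud detection only receives orders whose PurchasePrice property exceeds 1000.
namespaceManager.CreateSubscription("orders", "fraud-detection",
    new SqlFilter("PurchasePrice > 1000"));

// The sender stamps each message with the property the filter evaluates.
var message = new BrokeredMessage("order payload");
message.Properties["PurchasePrice"] = 1299.00;
```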
Because applications can interface with Service Bus in different ways, it supports three communication protocols: the native .NET client protocol (SBMP), AMQP 1.0, and HTTP/REST.
Figure 5: Languages and protocols supported by Service Bus.
Traditionally, message-oriented middleware products have used proprietary protocols for communication between client applications and brokers. This means that once you’ve selected a particular vendor’s messaging broker, you must use that vendor’s libraries to connect your client applications to that broker. This results in a "lock-in" to that vendor since porting an application to a different product requires re-coding all the connected applications.
Furthermore, connecting messaging brokers from different vendors is tricky and typically requires application-level bridging to move messages from one system to another, and to translate between their proprietary message formats.
AMQP 1.0 is an efficient, reliable, wire-level messaging protocol that can be used to build robust, cross-platform, messaging applications. The protocol has a simple goal: Define the mechanics of the secure, reliable, and efficient transfer of messages between two parties.
With the release of Service Bus 1.1, we are happy to announce that AMQP 1.0 support is now available in Service Bus.
Adding support for the AMQP 1.0 messaging protocol enables our customers to use messaging in new ways. One of the key new scenarios enabled in this release is exchanging messages between applications written on multiple platforms and running on multiple operating systems.
Figure 6 (see below) demonstrates the rich connectivity patterns that are now enabled by Service Bus.
Figure 6: Connectivity scenario involving various programming languages.
Innovation at the infrastructure layer has made it possible for organizations to start acting like cloud vendors by offering subscription-based IT resources to their business groups. In addition, service providers can now offer more advanced cloud services like Platform-as-a-Service (PaaS) and even Software-as-a-service (SaaS).
In this new world of connected services and mobile workforces, deciding on your cloud strategy can impact the entire organization – not just your IT spending structure.
With Service Bus 1.1 for Windows Server and the Windows Azure Pack, we adopted a key principle when it comes to someone’s cloud strategy: consistency in the Service Bus tenant experience across multiple clouds.
In other words, the scenarios revolving around brokered messaging (like loose coupling with queues and publish-subscribe with topics) are consistent (but not necessarily identical) across the multiple cloud offerings.
Consistency across clouds takes shape in various aspects of using Service Bus:
Next, we will explain how the Service Bus self-service tenant experience follows the one available in Windows Azure – as well as the consistent experience for developers using Service Bus in their applications.
Figure 7: Tenant experience.
When creating a subscription in the Windows Azure Pack and selecting a plan to use, the tenant portal reflects the services available with this subscription. If the chosen plan has a Service Bus cloud enabled with it (see Figure 8 below), the tenant sees Service Bus available in the portal and can then create a Service Bus namespace. Windows Azure Pack subscriptions and plans were already discussed in one of the earlier posts in this series.
Figure 8: Creating a Service Bus namespace.
In order to create messaging entities such as queues and topics, Service Bus requires you to create a namespace first. A service namespace is used for addressing, isolation, and management of the underlying messaging entities. All Service Bus messaging entities are created within the scope of a service namespace.
Creating a queue with the self-service portal is easy and straightforward. With the ‘quick create’ function you simply specify the name of the queue as well as the name of the namespace (see Figure 9 below). This quick create experience will create a new namespace in case one is not already selected.
Figure 9: Creating Service Bus queues and topics.
Service Bus 1.1 for Windows Server supports configuring authentication using the following:
Service Bus supports authorization rules at two levels: the namespace level and the entity (queue or topic) level.
Each authorization rule includes one (or more) permissions, including: Manage, Send, and Listen (see Figure 10 below).
Figure 10: Defining authorization rules for the entities of a namespace.
Once a messaging entity has been created, applications may send and receive messages. The self-service tenant portal exposes basic monitoring attributes for each entity, such as its length and last-accessed time, which indicates when a message was last sent or received (see Figure 11 below).
For more advanced queries you can use the Service Bus API to issue queries like: “Get me all queues which have more than 10 messages.”
Figure 11: Monitoring queues or topics.
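For instance, the "more than 10 messages" query mentioned above could look like the following sketch. The filter string syntax shown here is an assumption based on the SDK's entity-query support; check the NamespaceManager.GetQueues documentation for the exact grammar.

```csharp
using System;
using System.Collections.Generic;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

// Hypothetical connection string; see the configuration section later in this post.
string connectionString = "Endpoint=sb://[your namespace].servicebus.windows.net;...";
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

// Retrieve only the queues whose backlog exceeds 10 messages.
IEnumerable<QueueDescription> busyQueues =
    namespaceManager.GetQueues("messageCount Gt 10");

foreach (QueueDescription queue in busyQueues)
{
    Console.WriteLine("{0}: {1} messages", queue.Path, queue.MessageCount);
}
```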
Consistency within public and private clouds begins by using the same client SDK when developing applications with Service Bus. In fact, you don’t need to change your application or rebuild it if you wish to switch between environments. In other words, by changing a single entry in a configuration file, you select which broker to use: Private, hosted, or public cloud.
In addition, Service Bus 1.1 for Windows Server supports a local deployment, intended for development purposes only, where you install Service Bus locally on a client operating system using a SQL Server Express database.
A complete end-to-end sample can be found here.
As mentioned above, a single configuration entry specifies which broker to use. We call it the Service Bus connection string. When creating a new project in Visual Studio and adding a reference to the Service Bus SDK via the NuGet Package Manager, a new entry is added to your configuration files (app.config or web.config) with an empty connection string.
When deploying the application, you replace the connection string with one that points to your Service Bus cloud and namespace, along with the appropriate security settings.
<appSettings>
<!-- Service Bus specific app settings for messaging connections -->
<add key="Microsoft.ServiceBus.ConnectionString" value="Endpoint=sb://[your namespace].servicebus.windows.net;SharedSecretIssuer=owner;SharedSecretValue=[your secret]" />
</appSettings>
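For comparison, a connection string pointing at an on-premises Service Bus 1.1 for Windows Server deployment would look roughly like the following. The server and namespace names are placeholders, and 9354/9355 are the default runtime and management ports, which your deployment may have changed:

```xml
<appSettings>
  <!-- Example only: on-premises Service Bus 1.1 for Windows Server -->
  <add key="Microsoft.ServiceBus.ConnectionString"
       value="Endpoint=sb://[your server]/[your namespace];StsEndpoint=https://[your server]:9355/[your namespace];RuntimePort=9354;ManagementPort=9355" />
</appSettings>
```

Swapping between an entry like this and the Windows Azure one shown above is the kind of single configuration change that lets the same application target a private, hosted, or public cloud broker.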
While Service Bus offers many advanced messaging features, the simplest scenario is sending and receiving messages.
Sending a message requires the application to connect to Service Bus by using the connection string as well as providing an entity (queue or topic) to send the message to. The following example uses the connection string provided in the configuration file to connect to a topic whose name is specified in a local variable.
// Creates a topic client with the provided topic name.
TopicClient topicClient = TopicClient.Create(ServiceBusTopicName);
// Creates a simple brokered message and sends it.
topicClient.Send(new BrokeredMessage("Hello World"));
Note that the topic must be created before it can be used.
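As an illustrative sketch (using the same SDK as the snippets above; the NamespaceManager calls are one option among several, and the connection-string value is assumed), the topic can be created on first use:

```csharp
using Microsoft.ServiceBus;

// Hypothetical: reuse the connection string from the configuration file.
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

// Create the topic on first use so that sends do not fail with a missing entity.
if (!namespaceManager.TopicExists(ServiceBusTopicName))
{
    namespaceManager.CreateTopic(ServiceBusTopicName);
}
```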
When sending a message to a queue or topic, it remains there waiting to be consumed (received). With Service Bus 1.1 for Windows Server and the Windows Azure Pack, we have simplified the way an application receives messages, and we have introduced an event-driven programming model. In this model, you specify what is expected to be done with messages and the SDK takes care of the rest.
The following code snippet shows the simplest way to consume messages from a Service Bus subscription.
SubscriptionClient subsClient = SubscriptionClient.Create(ServiceBusTopicName, subsName);
subsClient.OnMessage((receivedMessage) =>
{
Console.WriteLine(string.Format("Processing received Message: Id = {0}, Body = {1}",
receivedMessage.MessageId, receivedMessage.GetBody<string>()));
});
Note that this is not a complete sample. For example, it lacks exception handling as well as performance optimizations. Still, it demonstrates just how simple it is to consume messages from Service Bus.
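For completeness, a slightly fuller sketch might register an exception handler and tune concurrency via OnMessageOptions. The option values below (five concurrent calls, auto-complete) are illustrative assumptions, not recommendations from the product team:

```csharp
using System;
using Microsoft.ServiceBus.Messaging;

var options = new OnMessageOptions
{
    AutoComplete = true,      // complete each message when the callback returns successfully
    MaxConcurrentCalls = 5    // process up to five messages in parallel
};

// Surface errors raised by the message pump instead of losing them silently.
options.ExceptionReceived += (sender, args) =>
{
    if (args.Exception != null)
    {
        Console.WriteLine("Message pump error: {0}", args.Exception.Message);
    }
};

SubscriptionClient subsClient = SubscriptionClient.Create(ServiceBusTopicName, subsName);
subsClient.OnMessage(receivedMessage =>
{
    Console.WriteLine("Processing received Message: Id = {0}, Body = {1}",
        receivedMessage.MessageId, receivedMessage.GetBody<string>());
}, options);
```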
With the release of Windows Server 2012 R2 and the Windows Azure Pack, we’ve identified several key deployment scenarios where Service Bus was used:
In addition, customers are also looking for a ‘self-hosted’ broker in the following cases:
In addition to the topologies mentioned above, Service Bus is fully supported by the Windows Azure Infrastructure-as-a-Service (IaaS) including support for SQL Azure.
Next, we will explain how the Service Bus architecture fits into the Windows Azure Pack overall architecture while still enabling a lightweight deployment of the messaging runtime components. We will also describe how the Service Provider experience supports hosted scenarios (either shared or dedicated).
Figure 12: Service Bus architecture.
Some key requirements of the Service Bus architecture include:
With Service Bus 1.1 for Windows Server and the Windows Azure Pack, we introduced the service provider experience.
In previous releases, we supported administrators with a set of PowerShell cmdlets for configuration and a Management Pack for monitoring. As we began to plan for R2, customer feedback was clear: They wanted a self-service experience for their tenants, they needed a way to manage multiple deployments with different scales and SLAs – and they wanted all of this under the same portal.
With these needs in mind, we have aligned the service provider experience of Service Bus with the rest of the services enabled in the Windows Azure Pack by providing the following capabilities:
The Service Provider experience, which has already been discussed here, consists of the following three steps (see Figure 13 below):
Figure 13: Service provider experience.
As noted above, you may automate the deployment of Service Bus with the Windows Azure Pack automation capabilities. However, if you are installing Service Bus manually, or upgrading from an older version, you need to connect your Service Bus deployment to the Service Provider’s portal.
From the portal’s main screen, just click ‘New’ followed by ‘Service Bus Cloud’ to select a Service Bus cloud to connect (see Figure 14 below).
Figure 14: Connecting to an existing Service Bus cloud.
Like all the Windows Azure Pack services, Service Bus exposes two sets of REST APIs to be used by the Windows Azure Pack portals. These APIs are authenticated with two sets of credentials which you need to provide when connecting to a Service Bus cloud.
When connecting a Service Bus cloud to the Service Provider portal (see Figure 15 below), an administrator can list all the clouds, see basic health data from each node, and monitor the messaging databases for health and space (see Figure 16 below).
Figure 15: Connecting to a Service Bus cloud.
Figure 16: Monitoring Service Bus messaging databases.
Like many other cloud services, health monitoring of Service Bus is available through System Center Operations Manager (see Figure 17 below). The Service Bus Management Pack monitors the cloud’s health, including the status of the underlying databases, hosts, and services.
With its rich set of monitoring rules and built-in alerts, System Center Operations Manager enables an administrator to detect service-related issues and their root causes.
Figure 17: Monitoring Service Bus in System Center.
Last week I got to sit down with the guys from TechNet Radio and talk about the 2nd big pillar in the What’s New in 2012 R2 series: Transform the Datacenter.
In this episode – which is a follow-up to our earlier (and very popular) discussion about PCIT – we go deep on how this R2 wave of products can build on existing IT investments and create something really powerful by leveraging a hybrid cloud environment.
Now that we’ve wrapped up the “Transform the Datacenter” pillar in the ongoing What’s New in 2012 R2 blog series, I sat down (well, we stand the whole time) for another conversation with Rick Claus, host of The Edge Show.
In this discussion, Rick and I talk about some of the ‘behind the scenes’ elements of this R2 pillar, as well as some of my favorite topics from the last four weeks.
You can read all the “Transform the Datacenter” posts here (great topics on IaaS, open source, hybrid identity, DR, etc.).
If you have questions about the 2012 R2 series, don’t hesitate to get in touch!
Part 7 of a 9-part series. Today’s post is the 2nd of two sections; to read the first half, click here.
As an industry we have worked on disaster recovery (DR) and high availability (HA) solutions for – well, it seems like forever. There has been a lot of good work done in this space, but one thing that has always stood out to me has been the fact that, for any enterprise to truly establish a solid DR solution, the costs have been incredibly high. So high, in fact, that these costs have been totally prohibitive; you could argue that building a complete DR solution is a luxury reserved for only the largest organizations.
One thing that I admired about Microsoft before joining the company – and something that I have come to appreciate even more since working here – is the relentless effort we make to simplify challenges and problems, to make the solutions approachable for all, and to deliver the solutions at an economical price.
With Windows Server 2012 R2, with Hyper-V Replica, and with System Center 2012 R2 we have delivered a DR solution for the masses.
This DR solution is a perfect example of how the cloud changes everything.
Because Windows Azure offers a global, highly available cloud platform with an application architecture that takes full advantage of the HA capabilities, you can build an app on Azure that will be available anytime and anywhere. This kind of functionality is why we made the decision to build the control plane, or administrative console, for our DR solution on Azure. The control plane and all the metadata required to perform a test, planned, or unplanned recovery will always be available. This means you don’t have to make the huge investments that have been required in the past to build a highly available platform to host your DR solution – Azure automatically provides this.
(Let me make a plug here that you should be looking to Azure for all the new applications you are going to build – and we’ll start covering this specific topic in next week’s R2 post.)
With this R2 wave of products, organizations of all sizes and maturity, anywhere in the world, can now benefit from a simple and cost-effective DR solution.
There’s one other thing that I am really proud of here: Like most organizations, we regularly benchmark ourselves against our competition. We use a variety of metrics, like: ‘Are we easier to deploy and operate?’ and ‘Are we delivering more value, and doing it at a lower price?’ Measurements like these have provided a really clear answer: Our competitors are not even in the same ballpark when it comes to DR.
During the development of R2, I watched a side-by-side comparison of what was required to set up DR for 500 VMs with our solution compared to a competitive offering, and the contrast was staggering. The difference in simplicity and the total amount of time required to set everything up was dramatic. In a DR scenario, one interesting unit of measurement is total mouse clicks. It’s easy to get carried away with counting clicks (hey, we’re engineers after all!), but, in the side-by-side comparison, the difference was tens of mouse clicks compared to hundreds. It is literally a difference of minutes vs. days.
You can read some additional perspectives I’ve shared on DR here.
In yesterday’s post we looked at the new hybrid networking functionality in R2 (if you haven’t seen it yet, it is a must-read), and in this post Vijay Tewari (Principal Program Manager for Windows Server & System Center) goes deep into the architecture of this DR solution, as well as this solution’s deployment and operating principles.
As always in this 2012 R2 series, check out the “Next Steps” at the bottom of this post for links to a variety of engineering content with hyper-technical overviews of the concepts examined in this post.
Disaster Recovery (DR) solutions have historically been expensive and complex to deploy. They require extensive configuration, and they do not work at cloud scale. Since current solutions fall short of addressing many DR needs due to their cost and complexity, workloads in need of protection are left vulnerable, and this exposes businesses to a range of compliance and audit violations. Moreover, because so many of these solutions are built using components from multiple vendors, any attempt to provide simplicity through a single pane of management is nearly impossible.
To address this need, Microsoft embarked on a journey to resolve these issues by delivering Hyper-V Replica (HVR) as a standard feature in Windows Server 2012. From the moment it entered the market, top reviewers and customers saw value in HVR and consistently ranked it amongst ‘the Top 10 features’ of the entire release.
In the R2 release we are following up on this success by providing a cloud-integrated solution for DR. This solution builds on some key enhancements in Windows Server 2012 R2 HVR around variable replication frequency, support for near-sync, and extended replication. From a management standpoint, we provide a DR solution via Windows Azure Hyper-V Recovery Manager (HRM), which is integrated with System Center Virtual Machine Manager (VMM). HRM is currently in limited preview and will be broadly available in the near future.
Our singular focus with HRM is democratizing Disaster Recovery by making it available for everybody, everywhere. HRM builds on the world-class assets of Windows Server, System Center, and Windows Azure and it is delivered via the Windows Azure Management Portal. Coupled with Windows Azure Backup (WAB), which offers data protection or backup, HRM completes the “Recovery Services” offering in Windows Azure. To know more about WAB, read this blog.
The key scenario that we want to address is providing a simple, easily deployed, and easily operated DR solution that provides:
There are five key tenets for the DR solution:
DR software is itself susceptible to the disasters that can hit the datacenters. We wanted to ensure that the core capability that provides DR is itself highly available, resilient and not subject to failure for the same reasons the customer’s workloads are failing. With these facts in mind, the control plane of our solution (HRM) is delivered as a cloud service we call DRaaS (Disaster Recovery as a Service). HRM has been built as a highly available service running on Windows Azure.
From an operations perspective, independent of which site you are administering, the recovery actions need to be taken only in a single place, i.e. the HRM portal. Since the metadata required for orchestration resides in the cloud, you are insured against losing the critical DR orchestration instructions even if your primary site is impacted, thereby addressing the common mistake wherein the monitoring system monitors itself.
By making the DR plans securely accessible everywhere, HRM drastically reduces the amount of lost time currently suffered by the business when initiating a disaster recovery. A customer DR plan may involve multiple sites. HRM manages multiple sites, as well as complex inter-site relationships, thereby enabling a customer to create a comprehensive DR plan. On the deployment side, HRM requires only one provider to be installed per VMM server (a single VMM server can manage 1,000 virtual hosts), thereby addressing the ever-present issue of complexity (the single most important blocker of DR deployments today).
Built on top of Virtual Machine Manager (VMM), HRM leverages existing investments that your fabric administrators have made in topology and management configurations. By leveraging your existing datacenter intelligence, HRM ensures there is no reason to worry about supporting/creating redundant configurations or managing multiple tools. Via deep VMM integration, HRM monitors environment changes in the datacenter and reacts appropriately to them.
HRM works at the logical abstraction level of VMM, making it a truly cloud-scale solution. Built on a cloud-first design from the ground up, the solution is elastic and can easily scale to large deployments.
HRM is available 24x7 because, frankly, you don’t want to invest in a DR solution for your DR solution. The reality is that customers will have different clouds as targets – private, hosted, or public clouds. The HRM solution is designed to deliver a consistent experience across all clouds, while reducing the cost of re-training personnel for each new deployment/topology.
With its scripting support, HRM orchestrates the failover of applications even in scenarios where different tiers are protected by heterogeneous replication technologies.
For example, a typical app that you could protect with HRM can have a virtualized front end paired with a back end in SQL Server protected via SQL AlwaysOn. Capabilities within HRM orchestration can seamlessly fail over such applications with a single click.
The HRM solution is engineered to deliver an extensible architecture such that it can enable other scenarios in the future, as well as ensure that partners can enrich, extend, or augment those scenarios (e.g. newer storage types, additional recovery objectives, etc.). With increasing deployment complexity and with systemic disasters becoming increasingly commonplace, DR solutions must keep pace. The service approach ensures that we can get these innovations out to customers faster than ever before.
Given these benefits, you should be able to roll out the solution and protect your first virtual machine in hours!
The following sections analyze how these tenets are delivered.
The HRM service is hosted in Windows Azure and orchestrates between private clouds built on the Microsoft cloud stack of Windows Server and System Center. HRM supports Windows Server 2012 and Windows Server 2012 R2, with System Center 2012 SP1 and System Center 2012 R2 Virtual Machine Manager.
The diagram below captures the high level architecture of the service. The service itself is in Windows Azure and the provider installed on the VMM servers sends the metadata of the private clouds to the service, which then uses it to orchestrate the protection and recovery of the assets in the private cloud. This architecture ensures the DR software is protected against disasters.
Since a key goal of this DR solution is extensibility, the replication channel that sends the data is highly flexible. Today, the channel supports Hyper-V Replica.
Within this architecture, it is important to note the considerable investments made towards security:
The goal of the HRM solution is to have your workloads protected in a couple of hours. The “How To” steps for doing this are here. Let’s examine how to plan, roll out and use HRM.
Before you begin setting up HRM, you should first plan your deployment from a capacity, topology, and security perspective. In the initial configuration phase, you should map the resources of your primary and secondary sites so that, during failover, the secondary site provides the resources needed for business continuity. After that, you should protect virtual machines and then create orchestration units for failover. Finally, you test and fail them over.
Before we look at these phases, here are a couple of notes to consider:
With these things in mind, here is how to plan your HRM deployment:
To deploy the solution, you need to download a single provider and run through a small wizard on the VMM Servers in NewYork and Chicago. The provider is downloaded from the Windows Azure portal from the Recovery Services tab.
The service can then orchestrate DR for workloads hosted by Hyper-V on the primary site. The provider, which runs in the context of VMM, communicates with the HRM service. No additional agents, no software, and no complex configurations are required.
To map the resources for compute and memory, you configure the “Protected Items”, which represent the logical clouds in VMM. For example, to protect the Gold cloud in VMM-NewYork with the Gold-Recovery cloud of VMM-Chicago, you choose values for simple configurations like replication frequency. You need to ensure that the capacity of the Gold-Recovery cloud will meet the DR requirements of virtual machines protected in the Gold cloud.
Once this is done, the system takes over and does the heavy lifting. It configures all the hosts in both the Gold and Gold-Recovery clouds with the required certificates and firewall rules - and it configures the Hyper-V Replica settings for both clusters and stand-alone hosts.
The diagram below shows the same process with hosts configured.
Note that the clouds, which are the resources for compute and memory, are shown in the tab “Protected Items” in the portal.
Once you have the cloud configured for protection, the next task is network mapping. As part of the initial VMM deployment, you have already created the networks on the primary site and the corresponding networks on the recovery site; now you map these networks in the service.
This mapping is used in multiple ways:
Network mapping works for the entire gamut of networks – VLANs or Hyper-V Network Virtualization. It even works for heterogeneous deployments, wherein the networks on primary and recovery sites are of different types.
The diagram below shows the tenant networks of the multi-tenanted Gold cloud as mapped to the tenant networks of the multi-tenanted Gold-Recovery cloud – the replica virtual machines are attached to the corresponding networks due to this mapping. For example, the replica virtual machine of Marketing is attached to Network Marketing Recovery since (a) the primary virtual machine is connected to Network Marketing and (b) Network Marketing in turn is mapped to Network Marketing Recovery.
For automated recovery, there are Recovery Plans (RPs), the orchestration construct that supports dependency groups and custom actions. Today, organizations create documents detailing their disaster recovery steps. These documents are cumbersome to maintain, and even when someone makes the effort to keep them up to date, they remain prone to human error by the staff hired to execute them.
RPs are the manifestation of our goal to simplify orchestration. With RPs, data is presented in the recovery plan view to help customers meet compliance/audit requirements. For example, at a quick glance customers can identify the last test failover of a plan or how long ago they performed a planned failover of a recovery plan.
Recovery plans work uniformly for all three types of failovers: Test Failover, Planned Failover, and Unplanned Failover. In a recovery plan, all virtual machines in a group fail over together, thereby improving RTO. Across groups, the failover happens in sequence, thereby preserving dependencies: Group1 followed by Group2, and so on.
In recovery, there is a continuum based on customer needs and expertise (as illustrated in the timeline below).
Organizations use DR drills, known as Test Failovers in the solution, for various purposes, such as compliance reasons, training staff around roles via simulated runs, verification of the system for patching, etc.
HRM leverages HVR and the networking support in VMM to deliver simplified DR drills. In most DR solutions, drills either impact production workloads or the protection of the workloads which makes it hard to carry out testing. HRM impacts neither, making regular testing a possibility.
To increase the quality of testing in an automated manner, and help customers focus on truly testing the app, a lot of background tasks are taken care of by the system. The DR Drill creates the required Test Virtual Machines in the right mode.
From the network perspective, the following options are available to run test failovers:
Once you signal that DR drills are complete, the system cleans up whatever it has created.
Planned failover is needed in cases like the following: Some compliance requirements mandate failing workloads over to the recovery site twice a year and then running them there for a week. Another typical scenario is when the primary site requires maintenance that prevents applications from running on that site.
To help customers fulfill these scenarios, HRM has first-class support for planned failover (PFO) through its orchestration tool, the RP. As part of a PFO, the virtual machines are shut down, the last changes are sent over to ensure zero data loss, and then the virtual machines are brought up in order on the recovery site.
After a PFO you can re-protect the virtual machine via reverse replication, in which case the replication goes in the other direction – from VMM-Chicago to VMM-NewYork. Once the maintenance is completed or the compliance objectives are met, you perform a PFO in a symmetric manner to get back to VMM-NewYork. Failback is a single-click gesture that executes planned failover in the reverse direction.
Much like insurance, we hope a customer never needs to use this! But, in eventualities such as natural disasters, this ensures that designated applications can continue to function. In the event of unplanned failovers, HRM attempts to shut down the primary machines in case some of the virtual machines are still running when the disaster strikes. It then automates their recovery on the secondary site as per the RP.
Despite all preparations, things can go wrong during a disaster. An unplanned failover might not succeed on the first attempt and may thus require another iteration. In the solution, you can re-run the same job and it will pick up from the last task that completed successfully.
As with PFO, failback after an unplanned failover is a single-click gesture that executes planned failover in the reverse direction. This brings symmetric behavior to these operations.
Topologies evolve in an organization with growth, deployment complexities, changes in environments, and so on. Unlike other solutions, HRM provides the ability to administer various topologies in a uniform manner. Examples include an active-active topology wherein the NewYork site provides protection for Chicago and vice versa, many sites being protected by one, complex inter-site relationships, or multiple branch offices being protected by a single head office. This enables capacity on secondary sites to be utilized, rather than merely reserved, while the primary site’s workloads are not running there.
With DR solutions comes a strong need for simple and scalable monitoring. HRM delivers on this by providing the right view for the right job. For example, when a user takes an action on the HRM portal to set up infrastructure, HRM recognizes that they will be monitoring the portal to ensure the action succeeded. Therefore, a rich view is available on the portal in the “Jobs” tab to help the user monitor in-progress jobs and take action on those that are waiting for input. On the other hand, when the user is not proactively monitoring the portal and the system needs to draw their attention, notification is provided through integration with Operations Manager (SCOM).
The table below captures this continuum and shows how all the required monitoring needs of the users are met.
Since all DR tasks are long-running, the system creates workflows for them. We have built a rich jobs framework that helps you monitor jobs, query for previous jobs, and export jobs. You can export jobs to keep an audit record, and to help you track the RTO of your recovery plans, jobs report time taken at task-level granularity.
First, a highly available (HA) VMM deployment is recommended to protect against downtime due to maintenance or patching of VMM on both the primary and secondary sites. HRM works seamlessly in this case: You can fail over the VMM service from one VMM server to another, and management and orchestration fail over seamlessly with it.
Second, there are scenarios wherein users use a single VMM server to manage both sites – primary and secondary. An example is one admin managing both sites (which happen to be close to each other) and wanting to see the virtual machines of both sites in one console. HRM supports this single-VMM scenario and enables pairing of clouds administered by the same VMM server.
Note that in the event of a complete failure of the primary site, a customer has to recover the VMM server itself on the secondary site before proceeding. Once VMM is up on the secondary site, the rest of the workloads can be failed over.
Microsoft’s DR solution addresses a key gap in the disaster recovery market around simplification and scale. Delivered as a service in Azure, HRM is designed to enable protection for many workloads that are currently lacking protection, thereby improving business continuity for organizations.
A solution like this is a game-changer for organizations of any size, and it is an incredibly exciting thing for the Windows Server, System Center, and Windows Azure teams to deliver.
To learn more about the topics covered in this post, check out the following articles:
Part 7 of a 9-part series. Today’s post is the first of two sections; to read the second half, click here.
One of the foundational requirements we called out in the 2012 R2 vision document was our promise to help you transform the datacenter. A core part of delivering on that promise is enabling Hybrid IT.
By focusing on Hybrid IT we were specifically calling out the fact that almost every customer we interacted with during our planning process believed that in the future they would be using capacity from multiple clouds. That may take the form of multiple private clouds an organization had stood up, or utilizing cloud capacity from a service provider or a public cloud like Azure, or using SaaS solutions running from the public cloud.
We assumed Hybrid IT would be the norm going forward, so we challenged ourselves to understand and simplify the challenges associated with configuring and operating in a multi-cloud environment. Certainly one of the biggest challenges of operating in a hybrid cloud environment is the network – everything from setting up the secure connection between clouds, to ensuring you can use your own IP addresses (BYOIP) in the hosted and public clouds you choose to use.
The setup, configuration, and operation of a hybrid IT environment is, by its very nature, incredibly complex – and we have poured hundreds of thousands of hours into the development of R2 to solve this industry-wide problem.
With the R2 wave of products – specifically Windows Server 2012 R2 and System Center 2012 R2 – enterprises can now benefit from the highly-available and secure connection that enables the friction-free movement of VMs across those clouds. If you want or need to move a VM or application between clouds, the transition is seamless and the data is secure while it moves.
The functionality and scalability of our support for hybrid IT deployments has not been easy to build, and each feature has been methodically tested and refined in our own datacenters. For example, consider that within Azure there are over 50,000 network changes every day, and every single one of them is fully automated. If even 1/10 of 1% of those changes had to be done manually, it would require a small army of people working constantly to implement and then troubleshoot the human errors. With R2, the success of processes like these, and our learnings from Azure, come in the box.
Whether you’re a service provider or working in the IT department of an enterprise (which, in a sense, is like being a service provider to your company’s workforce), these hybrid networking features are going to remove a wide range of manual tasks, and allow you to focus on scaling, expanding and improving your infrastructure.
In this post, Vijay Tewari (Principal Program Manager for Windows Server & System Center) and Bala Rajagopalan (Principal Program Manager for Windows Server & System Center) provide a detailed overview of 2012 R2’s hybrid networking features, as well as solutions for common scenarios like enabling customers to create extended networks spanning clouds, and enabling access to virtualized networks.
Don’t forget to take a look at the “Next Steps” section at the bottom of this post, and check back tomorrow for the second half of this week’s hybrid IT content which will examine the topic of Disaster Recovery.
Hybrid networking refers to capabilities that seamlessly extend an enterprise’s on-premises network to the service provider’s (hosted or Azure) cloud, and many teams across Microsoft have come together to deliver these end-to-end experiences and scenarios for our customers.
Hybrid networking enables enterprises to easily move their VMs (and virtualized workloads) across clouds while maintaining IP addresses and other networking policies. With hybrid networking, an enterprise administrator can treat a composite network spanning on-premises and cloud boundaries as a single extended network.
As enterprises continue to utilize more and more capacity from the cloud, service providers face a pressing issue: How does one simplify the to-the-cloud migration of an increasing number of tenants while minimizing their capital and operational expenses?
Hybrid networking improves both the ease of workload migration and the cost to the service provider. To maximize this benefit, Windows Server 2012 R2 and System Center 2012 R2 have been built to deliver cloud-optimized server and management capabilities that enable service providers to deliver efficient hybrid networks that easily onboard tenants.
These capabilities broadly fall under three specific areas we’ll examine today:
This post will also cover 3 key scenarios that have been designed into the 2012 R2 wave of products.
Windows Server 2012 included the cross-premises gateway feature, which allowed enterprise sites to be connected to each other or to the cloud using standard VPN protocols. In Windows Server 2012 R2, this feature is extended into a full-fledged site-to-site (S2S) VPN gateway to support cloud service provider and enterprise customer requirements (see Figure 1 below).
These customer requirements include:
Figure 1
With the 2012 R2 release, we have further refined the Hyper-V Extensible Switch and Hyper-V Network Virtualization which were both introduced in Windows Server 2012.
The Hyper-V Extensible Switch now allows third party extensions to act on packets before and after HNV encapsulation for a richer set of policy and filtering implementations. Additionally, support for hybrid forwarding is available – this allows different forwarding agents to handle packets based on type, e.g. HNV or non-HNV encapsulated.
The impact of this is big: Multiple network virtualization solutions (one provided by HNV, and another from the forwarding switch extension) can co-exist on the same Hyper-V host. Regardless of which agent performs the forwarding computation, the forwarding extension can apply additional policies to the packet. More details on the 2012 R2 Hyper-V Extensible Switch features can be found here.
HNV also now supports dynamic IP address learning, an in-box S2S gateway (covered in this article), integration with NIC teaming for load balancing, high availability, and NVGRE task offload support from NIC vendors. More info on these elements can be found in this blog article.
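To make the policy model concrete, here is a hedged sketch of the host-side HNV objects that VMM distributes under the covers. The addresses, virtual subnet ID, routing domain ID, and MAC address are placeholders for illustration; in a VMM-managed deployment you would not create these records by hand:

```powershell
# A lookup record maps a VM's customer address (CA) to the provider
# address (PA) of the Hyper-V host where it currently runs.
New-NetVirtualizationLookupRecord -CustomerAddress "10.1.1.10" `
    -ProviderAddress "192.168.4.21" -VirtualSubnetID 6001 `
    -MACAddress "00155D3A0001" -Rule "TranslationMethodEncap"

# A customer route defines routing inside the tenant's virtual network.
New-NetVirtualizationCustomerRoute `
    -RoutingDomainID "{11111111-2222-3333-4444-000000000000}" `
    -VirtualSubnetID 6001 -DestinationPrefix "10.1.1.0/24" `
    -NextHop "0.0.0.0" -Metric 255
```

With dynamic IP address learning in R2, records like these no longer need to be updated manually every time a VM acquires a new address.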
In addition to the features noted above, Virtual Receive-Side Scaling (vRSS) in the R2 release enables traffic-intensive workloads to scale up to optimally utilize high speed NICs (10G) even from a VM. In addition to this, in an effort to improve diagnostics and thereby drive down opex, we have enabled remote packet captures and ETW traces in near real-time, without a need to log on to the destination machine.
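As an illustration, vRSS is turned on from inside the guest on the virtual NIC; the adapter name "Ethernet" below is an assumption for this sketch:

```powershell
# Inside the guest VM: inspect RSS settings on the virtual NIC, then
# enable RSS so receive processing can spread across the VM's vCPUs.
Get-NetAdapterRss -Name "Ethernet"
Enable-NetAdapterRss -Name "Ethernet"
```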
We have also enabled basic management of physical switches (with standards-based schemas) using PowerShell and VMM, thereby making it possible to automate certain diagnostics across the hosts and into the physical network. Finally, deployment and management are now entirely automated via VMM, with tenant self service via the Windows Azure Pack (as shown in Figure 1).
For a deeper dive on the network virtualization enhancements in R2, check out this recent post on the Networking Blog.
The Windows Server 2012 IP Address Management (IPAM) solution supported core IP address, DHCP, and DNS management functions. With the R2 release, IPAM adds several major enhancements, such as:
There are also enhancements to the addressing and naming services themselves, including per-zone query metrics support in DNS, and FQDN-based DHCP policies. The use of IPAM in a hoster datacenter is depicted in Figure 1.
Our recent TechNet article on IPAM in Windows Server 2012 R2 describes these enhancements in more detail.
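For a flavor of how IPAM is driven from PowerShell, the following is a minimal sketch; the domain name and GPO prefix are assumptions for this example:

```powershell
# Provision managed servers via Group Policy (run once per domain).
Invoke-IpamGpoProvisioning -Domain corp.contoso.com -GpoPrefixName IPAM1

# Query utilization of managed IPv4 ranges from the IPAM server.
Get-IpamRange -AddressFamily IPv4 |
    Select-Object StartAddress, EndAddress, PercentageUtilized
```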
Now, to illustrate how these technologies have been built and bundled together to serve as solutions for common scenarios encountered by IT pros and service providers, the following three scenarios show how hybrid networking can make an infrastructure environment more powerful, scalable, resilient, and manageable.
We described the integrated approach we have taken in enabling our customers to deploy an IaaS solution in an earlier post in this R2 series. As service providers and enterprises deploy IaaS, their customers in turn deploy their VMs onto this capacity.
Traditionally, these VMs were assigned IP addresses from a service provider network address space, thereby forcing customers to renumber their VMs and rethink their network environments. With the availability of network virtualization, customers can create their own virtualized network on top of the service provider network infrastructure, utilize their own private IP numbering plan within the virtualized network, and connect it to their on-premises network to create an extended network that spans the on-premises infrastructure and the service provider cloud.
This approach dramatically simplifies the migration of workloads to the cloud and back.
This scenario is enabled by a combination of two key aspects: First, network virtualization, which enables the service provider to run multiple tenant networks with overlapping IP addresses on the same physical infrastructure. Second, the capability to connect the customer’s virtualized network in the service provider cloud back to the customer’s on-premises network to form one extended network.
To enable this deployment, the service provider provisions the network infrastructure and a set of S2S VPN gateways to support the creation of customer (virtualized) networks and S2S connections. Next, the customer self-provisions his virtualized network and S2S connections using the Windows Azure Pack portal.
Now, let’s look at each step of this scenario in detail:
The Windows Azure Portal screenshot snippets below illustrate these steps.
First, the customer specifies the subnet in his private address space where the S2S gateway should be located, and (optionally) enables BGP. The customer also specifies the address of the designated DNS server to be used by applications running in his virtualized network. This could be a server located in the customer’s virtualized network or in the on-premises network, or it could be a server made available by the service provider.
He next specifies the IP address space to be used for the virtualized network. This is typically from the customer’s private IP address space.
The customer then specifies the BGP parameters, which include the Autonomous System Number (ASN) of the virtualized network, the peer BGP IP address, and the ASN of the on-premises network.
This results in BGP configuration via VMM being triggered on the service provider side (see screenshot above). BGP may also be configured by the admin using a script as described here, which allows more detailed configuration.
Finally, the customer specifies a name for the on-premises site, the public IP address of the on-premises S2S gateway, and the IP addresses that are reachable on the on-premises site over the S2S connection. The last two parameters are required to set up S2S connection on-demand from the service provider side, and to determine which destinations in the extended network are in the on-premises side (if BGP is not enabled).
After completing these tasks, the customer deploys an S2S gateway at the on-premises site that connects to the service provider cloud. This could be an existing 3rd party edge device, or a Windows Server 2012 R2 S2S gateway.
In the former case, the customer uses vendor-specific commands to set up the S2S connection. In the latter case, customers can use the following script template to automate the configuration of the gateway:
############### Macros for RRAS Config on Ent GW ##############################
$S2SDestination = "100.0.0.6"
$IPv4Subnet = "172.23.90.4/32:100"
$BGPPeerAddress = "172.23.90.10"
$BGPAddress = "172.23.90.4"
$BGP_PeerASN = 64522
$BGP_LocalASN = 64512
################### Install S2S VPN on Ent GW ##############################
New-VM -Name GW -MemoryStartupBytes 2.5GB -SwitchName Internet -VHDPath "E:\GW-VMs\GW.vhd" -Path "E:\VM"
Add-WindowsFeature -Name Remoteaccess -IncludeAllSubFeature -IncludeManagementTools
Import-Module RemoteAccess
Install-RemoteAccess -VpnType VpnS2S
############## Configure S2S VPN on Ent GW ##############################
Add-VpnS2SInterface -Name "-Tunnel" -Protocol IKEv2 -Destination $S2SDestination -AuthenticationMethod PSKOnly -SharedSecret 111_aaa -Persistent -IPv4Subnet "172.23.90.10/32:100" -NumberOfTries 0
############### Configure BGP on Ent GW ##############################
#### Add BGP Router ####
Add-BgpRouter -BgpIdentifier $BGPAddress -LocalASN $BGP_LocalASN
#### Add BGP Peers ####
Add-BgpPeer -Name "ServiceProviderSite1" -LocalIPAddress $BGPAddress -PeerIPAddress $BGPPeerAddress -PeerASN $BGP_PeerASN
#### Add custom networks for advertisements to peers ####
Add-BgpCustomRoute -Interface "Intranet"
## End Config Ent GW - ##
Figure 2
In the former case, the internal router learns external routes (10.2.1.0/24) from the edge device, and it must distribute these routes to other on-premises routers using an IGP. In the latter case, the edge device itself must distribute the external routes to other on-premises routers using the IGP it runs.
A variation of this topology where Windows Server 2012 R2 gateway is deployed in the customer site is shown in Figure 3 (see below).
Figure 3
Applications or services running in the customer’s virtualized network may access resources located on the Internet such as public DNS servers, web services, etc. This is a common requirement in many deployments. There are two options to enable this scenario:
The R2 release supports both options.
The service provider can configure the NAT function in the S2S VPN gateway to enable direct Internet access from the customer’s virtualized network. The NAT function in the Windows Server 2012 R2 gateway is tenant-aware and allows VMs in multiple customer networks with overlapping IP addresses to access the same Internet IP address.
Customers can activate NAT for their virtualized networks using the Windows Azure Pack-based self-service portal. If S2S VPN and NAT are enabled for a virtual network, BGP enables learning of customer routes on the gateway so that only Internet traffic is NAT’ed.
There are often instances where resources available on a customer’s virtualized network need to be made accessible to users who are outside the customer’s premises.
Examples of this include disaster recovery, whereby backup or recovery VMs run in the cloud and need to be made accessible to users after a disaster. More routinely, a service running in the cloud may need to be made available to users who are working off-premises, including from public hotspots. Similarly, an administrator might want to log in to a VM deployed in the cloud from anywhere.
To support this scenario, the service provider can enable access to VMs or services in a customer’s virtualized network from different devices over the Internet using the remote access VPN feature built into the Windows Server 2012 R2 S2S VPN gateway.
Users belonging to the customer’s organization can connect to their VMs or services in the cloud via the multitenant S2S gateway using industry standard IPsec/IKEv2 VPN connections from various devices like laptops, tablets and smartphones. If the devices are behind a firewall or router that does not allow IPsec traffic, SSTP (SSL) VPN can be used to connect to the gateway. SSTP VPN uses https (port 443) to connect to the gateway and can traverse firewalls that block any traffic that is not http/https.
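From the client side, a connection like this can be created with the in-box VPN cmdlets on Windows 8.1. This is a sketch only; the connection name and gateway address are placeholders:

```powershell
# Create a VPN connection to the multitenant gateway using SSTP, which
# tunnels over HTTPS (port 443) and so works behind firewalls that only
# allow outbound web traffic.
Add-VpnConnection -Name "ContosoCloud" `
    -ServerAddress "vpn.fabrikam-hosting.com" `
    -TunnelType Sstp -AuthenticationMethod Eap -EncryptionLevel Required
```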
The service provider can also enable remote access VPN for multiple customers on the same gateway with a single public IP address with all connections using port 443. Multiple customers can use overlapping IP addresses for VPN clients and the R2 multi-tenancy feature enables traffic isolation. All the VPN client connections are authenticated by the gateway directly or by using a RADIUS server in the service provider network.
After a user connects over VPN, he can access VMs in the appropriate virtualized network in the cloud, as well as on-premises networks over S2S VPN (if provisioned). The service provider can easily enable multitenant VPN access in the S2S VPN gateway using a PowerShell script as described here. VMM support for this is not available in the R2 release.
The following figure shows the cmdlets used to enable VPN for customers Contoso and Fabrikam:
PS C:\> Enable-RemoteAccessRoutingDomain -Name "Contoso" -Type Vpn
PS C:\> Enable-RemoteAccessRoutingDomain -Name "Fabrikam" -Type Vpn
PS C:\> Set-RemoteAccessRoutingDomain -Name "Fabrikam" -IPAddressRange 11.11.11.1, 11.11.11.200 -TenantName "Fabrikam"
PS C:\> Set-RemoteAccessRoutingDomain -Name "Contoso" -IPAddressRange 11.11.11.1, 11.11.11.200 -TenantName "Contoso"
Windows Server 2012 R2 and System Center 2012 R2 provide a set of advanced capabilities for service providers to implement hybrid networking cost-effectively, reliably, and at scale. This includes multitenant S2S connectivity, NAT, and remote access VPN. In conjunction with the Windows Azure Pack, VMM, and PowerShell scripting – service providers can easily automate the on-boarding of customers, as well as set up and manage all hybrid networking functions.
Tomorrow we’ll cover another key part of a hybrid IT environment: Protecting your data with an enterprise-grade disaster recovery solution. It’s an under-served part of the industry, and I believe we have addressed it proactively with some very impressive features.
To get even deeper on the topics covered in this post, check out the engineering posts and TechEd sessions included below.
As some of you may have already noticed, earlier this morning Microsoft announced that Windows 8.1 will be available to consumers and businesses worldwide on October 18, 2013.
But before you start your 8.1 party (with a DJ in a data center, for example), there’s even more good news:
I'm excited to announce that, on the same day, eligible customers will be able to download Windows Server 2012 R2 and System Center 2012 R2, as well as the latest update to Windows Intune! We’ll make evaluation versions available through the TechNet Evaluation Center, and these products will be available for new purchases when they hit the price list on November 1st.
Is it a coincidence that this will be the 501st anniversary of Michelangelo exhibiting the ceiling of the Sistine Chapel for the first time? If you love great works of art, then it’s up to you to decide.
Just a few weeks ago I wrote about the common planning efforts and engineering milestones we shared during the development of this R2 wave – and seeing these products and services become generally available on the same day (with all their integrated scenarios!) is the ultimate benefit!
I encourage you to keep learning about the 2012 R2 releases via our “What’s New in 2012 R2” series on this blog, and also download the preview bits if you haven’t already done so.
Part 6 of a 9-part series.
Leaders in any industry have one primary responsibility in common: Sifting through the noise to identify the right areas to focus on and invest their organization’s time, money, and people. This was especially true during our planning for the 2012 R2 wave of products; and this planning identified four key areas of investment where we focused all our resources.
These areas of focus were the consumerization of IT, the move to the cloud, the explosion of data, and new modern business applications. To enable our partners and customers to capitalize on these four areas, we developed our Cloud OS strategy, and it immediately became obvious to us that each of those focus areas relied on consistent identity management in order to operate at an enterprise level.
For example, the consumerization of IT would be impossible without the ability to verify and manage the user’s identity and devices; an organization’s move to the cloud wouldn’t be nearly as secure and dynamic without the ability to manage access and connect people to cloud-based resources based on their unique needs; the explosion of data would be useless without the ability to make sure the right data is accessible to the right people; and new cloud-based apps need to govern and manage access just like applications always have.
In the 13+ years since the original Active Directory product launched with Windows 2000, it has grown to become the default identity management and access-control solution for over 95% of organizations around the world. But, as organizations move to the cloud, their identity and access control also need to move to the cloud. As companies rely more and more on SaaS-based applications, as the range of cloud-connected devices used to access corporate assets continues to grow, and as more hosted and public cloud capacity is used, companies must expand their identity solutions to the cloud.
Simply put, hybrid identity management is foundational for enterprise computing going forward.
With this in mind, we set out to build a solution in advance of these requirements to put our customers and partners at a competitive advantage.
To build this solution, we started with our “Cloud first” design principle. To meet the needs of enterprises working in the cloud, we built a solution that took the power and proven capabilities of Active Directory and combined it with the flexibility and scalability of Windows Azure. The outcome is the predictably named Windows Azure Active Directory.
By cloud optimizing Active Directory, enterprises can stretch their identity and access management to the cloud and better manage, govern, and ensure compliance throughout every corner of their organization, as well as across all their utilized resources.
This can take the form of seemingly simple processes (albeit very complex behind the scenes) like single sign-on which is a massive time and energy saver for a workforce that uses multiple devices and multiple applications per person. It can also enable the scenario where a user’s customized and personalized experience can follow them from device to device regardless of when and where they’re working. Activities like these are simply impossible without a scalable, cloud-based identity management system.
If anyone doubts how serious and enterprise-ready Windows Azure AD already is, consider these facts:
Windows Azure AD is battle tested, battle hardened, and many other verbs preceded by the word “battle.”
But, perhaps even more importantly, Windows Azure AD is something Microsoft has bet its own business on: Both Office 365 (the fastest growing product in Microsoft history) and Windows Intune authenticate every user and device with Windows Azure AD.
In this post, Vijay Tewari (Principal Program Manager for Windows Server & System Center), Alex Simons (Director of Program Management for Active Directory), Sam Devasahayam (Principal Program Management Lead for Windows Azure AD), and Mark Wahl (Principal Program Manager for Active Directory) take a look at one of R2’s most innovative features, Hybrid Identity Management.
As always in this series, check out the “Next Steps” at the bottom of this post for links to a wide range of engineering content with deep, technical overviews of the concepts examined in this post.
Today’s hybrid IT environment dictates that customers have the ability to consume resources from on-premises infrastructure, as well as those offered by service providers and Windows Azure. Identity is a critical element that is needed to provide seamless experiences when users access these resources. The key to this is using an identity management system that enables the use of the same identities across providers.
Previously on the Active Directory blog, the Active Directory team has discussed “What’s New in Active Directory in Windows Server 2012 R2,” as well as the features which support “People-centric IT Scenarios.” These PCIT scenarios enable organizations to provide users with secure access to files on personal devices, and further control access to corporate resources on premises.
In this post, we’ll cover the investments we have made in the Active Directory family of products and services. These products dramatically simplify Hybrid IT and enable organizations to manage services consistently, using the same identities both on-premises and in the cloud.
First, let’s start with some background.
Today, Active Directory in Windows Server (Windows Server AD) is widely adopted across organizations worldwide, and it provides the common identity fabric across users, devices and their applications. This enables seamless access for end users – whether they are accessing a file server from their domain joined computer, or accessing email or documents on a SharePoint server. It also allows IT to set access policies on resources, and is the foundation for Exchange and many other enterprise critical capabilities.
Today’s Hybrid IT world is focused on driving efficiencies in infrastructure services. As a result we see organizations move more application workloads to a virtualized environment. Windows Azure provides infrastructure services to spin up new Windows Server machines within minutes and make adjustments as usage needs change. Windows Azure also enables you to extend your enterprise network with Windows Azure Virtual Network. With this, when applications that rely on Windows Server AD need to be brought into the cloud, it is possible to locate additional domain controllers on Windows Azure Virtual Network to reduce network lag, improve redundancy, and provide domain services to virtualized workloads.
One scenario that has already been delivered (starting with Windows Server 2012) is enabling Windows Server 2012’s Active Directory Domain Services role to be run within a virtual machine on Windows Azure.
You can evaluate this scenario via this tutorial and create a new Active Directory forest in servers hosted on Windows Azure Virtual Machines. You can also review these guidelines for deploying Windows Server AD on Windows Azure Virtual Machines.
We have also been building a new set of features into Windows Azure itself – Windows Azure Active Directory. Windows Azure Active Directory (Windows Azure AD) is your organization’s cloud directory. This means that you can decide who your users are, what information to keep in the cloud, who can use or manage that information, and what applications or services are allowed to access it.
Windows Azure AD is implemented as a cloud-scale service in Microsoft data centers around the world, and it has been exhaustively architected to meet the needs of modern cloud-based applications. It provides directory, identity management, and access control capabilities for cloud applications.
Managing access to applications is a key scenario, so both single tenant and multi-tenant SaaS apps are first class citizens in the directory. Applications can be easily registered in your Windows Azure AD directory and granted access rights to use your organization’s identities. If you are a developer for a cloud ISV, you can register a multi-tenant SaaS app you've created in your Windows Azure AD directory and easily make it available for use by any other organization with a Windows Azure AD directory. We provide REST services and SDKs in many languages to make Windows Azure AD integration easy for you to enable your applications to use organizational identities.
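As a hedged illustration of the REST surface, an application can read directory data over HTTPS once it has an OAuth access token. Token acquisition is omitted here, and the tenant domain is a placeholder:

```powershell
# Query the tenant's users through the Windows Azure AD Graph endpoint.
# $token must hold a valid OAuth access token for the tenant.
$tenant  = "contoso.onmicrosoft.com"
$headers = @{ Authorization = "Bearer $token" }
Invoke-RestMethod -Uri "https://graph.windows.net/$tenant/users?api-version=2013-04-05" `
    -Headers $headers
```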
This model powers the common identity of users across Windows Azure, Microsoft Office 365, Dynamics CRM Online, Windows Intune, and third party cloud services (see diagram below).
For those of you who already have a Windows Server AD deployment, you are probably wondering “What does Windows Azure AD provide?” and “How do I integrate with my own AD environment?” The answer is simple: Windows Azure AD complements and integrates with your existing Windows Server AD.
Windows Azure AD complements Windows Server AD for authentication and access control in cloud-hosted applications. Organizations which have Windows Server Active Directory in their data centers can connect their domains with their Windows Azure AD. Once the identities are in Windows Azure AD, it is easy to develop ASP.NET applications integrated with Windows Azure AD. It is also simple to provide single sign on and control access to other SaaS apps such as Box.com, Salesforce.com, Concur, Dropbox, Google Apps/Gmail. Users can also easily enable multi-factor authentication to improve security and compliance without needing to deploy or manage additional servers on-premises.
The benefit of connecting Windows Server AD to Windows Azure AD is consistency – specifically, consistent authentication for users so that they can continue with their existing credentials and will not need to perform additional authentications or remember supplementary credentials. Windows Azure AD also provides consistent identity. This means that as users are added and removed in Windows Server AD, they will automatically gain and lose access to applications backed by Windows Azure AD.
Because Windows Azure AD provides the underlying identity layer for Windows Azure, this ensures an organization can control who among their in-house developers, IT staff, and operators can access their Windows Azure Management Portal. In this scenario, these users do not need to remember a different set of credentials for Windows Azure because the same set of credentials are used across their PC, work network, and Windows Azure.
Connecting your organization’s Windows Server AD to Windows Azure AD is a three-step process.
First, your organization may already have Windows Azure AD. If you have subscribed to Office 365 or Windows Intune, your users are automatically stored in Windows Azure AD, and you can manage them from the Windows Azure Management Portal by signing in as your organization’s administrator and adding a Windows Azure subscription.
This video explains how to use an existing Windows Azure AD tenant with Windows Azure:
If you do not have a subscription to one of these services, you can create a new Windows Azure AD tenant by following this link to sign up for Windows Azure as an organization.
Once you sign up for Windows Azure, sign in as the new user for your tenant (e.g., “user@example.onmicrosoft.com”), and pick a Windows Azure subscription. You will then have a tenant in Windows Azure AD which you can manage.
When logged into the Windows Azure Management Portal, go to the “Active Directory” item and you will see your directory.
Next, you can bring in your users from your existing AD domains. This process is outlined in detail in the directory integration roadmap.
After clicking the “Active Directory” tab, select the directory tenant which you are managing. Then, select the “Directory Integration” option.
Once you enable integration, you can download the Windows Azure Active Directory Sync tool from the Windows Azure Management portal, which will then copy the users into Windows Azure AD and continue to keep them updated.
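The behavior of the sync tool can be pictured as a one-way mirror from on-premises AD into the cloud directory. The sketch below is conceptual only – the function and data shapes are illustrative, not the actual DirSync tool's interfaces – but it shows the core idea: compute the difference between the two directories and apply only the adds and removals.

```python
# Conceptual sketch of one-way directory synchronization: the sync tool
# compares the on-premises user list with the cloud directory and applies
# only the differences. Names and shapes here are illustrative, not the
# actual Windows Azure Active Directory Sync tool API.

def sync_users(on_prem_users, cloud_users):
    """Return the adds and removals needed to mirror on-prem into the cloud."""
    on_prem = set(on_prem_users)
    cloud = set(cloud_users)
    to_add = sorted(on_prem - cloud)      # new on-prem users to provision
    to_remove = sorted(cloud - on_prem)   # deleted on-prem users to deprovision
    return to_add, to_remove

adds, removes = sync_users(
    on_prem_users=["alice", "bob", "carol"],
    cloud_users=["alice", "dave"],
)
print(adds)     # users gaining access to Windows Azure AD-backed apps
print(removes)  # users losing access
```

Running this repeatedly is what "continues to keep them updated" means in practice: each pass converges the cloud directory to the on-premises state.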
Finally, for authentication we’ve made it simple to provide consistent password-based authentication across both domains and Windows Azure AD. We do this with a new password hash sync feature.
Password hash sync is great because users can sign on to Windows Azure with the password that they already use to log in to their desktop or other applications that are integrated with Windows Server AD. Also, because the Windows Azure Management portal is integrated with Windows Azure AD, it supports single sign-on with an organization’s on-premises Windows Server AD.
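The key property of password hash sync is that only a derived hash – never the plaintext password – leaves the on-premises directory. The sketch below illustrates that idea with a generic salted PBKDF2 derivation; the real feature uses a different, Microsoft-internal hashing scheme, so treat this strictly as a conceptual model.

```python
import hashlib
import hmac

# Illustrative sketch of the idea behind password hash sync: only a salted
# hash of the password (never the password itself) is copied to the cloud
# directory, where it can be compared at sign-in. The actual feature uses a
# different, Microsoft-internal hashing scheme; this is conceptual only.

def derive_hash(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = b"fixed-demo-salt"                  # real systems use a random per-user salt
synced = derive_hash("P@ssw0rd!", salt)    # value stored in the cloud directory

# At sign-in, the cloud directory re-derives the hash and compares it
# in constant time; the plaintext password was never synchronized.
attempt = derive_hash("P@ssw0rd!", salt)
print(hmac.compare_digest(synced, attempt))
```

Because the same password produces the same derived value, users keep one credential across their desktop and the cloud, which is exactly the consistency benefit described above.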
If you wish to enable users to automatically obtain access to Windows Azure without needing to sign in again, you can use Active Directory Federation Services (AD FS) to federate the sign-on process between Windows Server Active Directory and Windows Azure AD.
In Windows Server 2012 R2, we’ve made a series of improvements to AD FS to support Hybrid IT.
We blogged about it recently in the context of People Centric IT in this post, but the same concepts of risk management apply to any resource that is protected by Windows Azure AD. AD FS in Windows Server 2012 R2 includes deployment enhancements that enable customers to reduce their infrastructure footprint by deploying AD FS on domain controllers, and it supports more geo load-balanced configurations.
AD FS includes additional prerequisite checking, it supports group-managed service accounts to reduce downtime, and it offers enhanced sign-in experiences that provide a seamless experience for users accessing Windows Azure AD-based services.
AD FS also implements new protocols (such as OAuth) that deliver consistent development interfaces for building applications that integrate with Windows Server AD and with Windows Azure AD. This makes it easy to deploy an application on-premises or on Windows Azure.
For organizations that have deployed third-party federation already, Shibboleth and other third-party identity providers are also supported by Windows Azure AD for federation to enable single sign-on for Windows Azure users.
Once your organization has a Windows Azure AD tenant and has followed these steps, your organization’s users will be able to interact seamlessly with the Windows Azure Management Portal, as well as with other Microsoft and third-party cloud services. And all of this can be done with the same credentials and authentication experiences that they have with their existing Windows Server Active Directory.
As IT organizations evolve to support resources beyond their data centers, Windows Azure AD and the Windows Server AD enhancements in Windows Server 2012 and Windows Server 2012 R2 provide seamless access to these resources.
In the coming weeks you will see more details of the Active Directory enhancements in Windows Azure and in Windows Server 2012 R2 on the Active Directory blog.
This post is just the first of three Hybrid IT posts that this “What’s New in 2012 R2” series will cover. Next week, watch for two more that cover hybrid networking and disaster recovery. If you have any questions about this topic, don’t hesitate to leave a question in the comment section below, or get in touch with me via Twitter @InTheCloudMSFT.
To learn more about the topics covered in this post, check out the following articles. You can also obtain a Windows Azure AD directory by signing up for Windows Azure as an organization.
Part 5 of a 9-part series. Today’s post is the 2nd of two sections; to read the first half, click here.
I recently had an opportunity to speak with a number of leaders from the former VMware User Group (VMUG), and it was an incredibly educational experience. I say “former” because many of the VMUG user group chapters are updating their focus/charter and are renaming themselves the Virtual Technology User Group (VTUG). This change is a direct result of how they see market share and industry momentum moving to solutions like the consistent clouds developed by Microsoft.
In a recent follow-up conversation with these leaders, I asked them to describe some common topics they hear discussed in their meetings. One of the leaders commented that the community is saying something really specific: “If you want to have job security and a high-paying job for the next 10 years, you better be on your way to becoming an expert in the Microsoft clouds. That is where this industry is going.”
When I look at what is delivered in these R2 releases, the innovation is just staggering. This industry-leading innovation – the types of technical advances that VTUG groups are confidently betting on – is really exciting.
With this innovation in mind, in today’s post I want to discuss some of the work we are doing around the user experience for the teams creating the services that are offered, and I want to examine the experience that can be offered to the consumer of the cloud (i.e. the tenants). While we were developing R2, we spent a lot of time ensuring that we truly understood exactly who would be using our solutions. We exhaustively researched their needs, their motivations, and how various IT users and IT teams relate to each other. This process was incredibly important because these individuals and teams all have very different needs – and we were committed to supporting all of them.
The R2 wave of products has been built with this understanding. The IT teams actually building and operating a cloud (or clouds) have very different needs than the individuals consuming the cloud (tenants). The experience for the infrastructure teams focuses on just that – the infrastructure; the experience for tenants focuses on the applications/services and their seamless operation and maintenance.
In yesterday’s post we focused heavily on the innovations in these R2 releases in the infrastructure – storage, network, and compute – and, in this post, Erin Chapple, a Partner Group Program Manager in the Windows Server & System Center team, will provide an in-depth look at Service Provider and Tenant experience and innovations with Windows Server 2012 R2, System Center 2012 R2, and the new features in Windows Azure Pack.
As always in this series, check out the “Next Steps” at the bottom of this post for links to a variety of engineering content with hyper-technical overviews of the concepts examined in this post. Also, if you haven’t started your own evaluation of the 2012 R2 previews, visit the TechNet Evaluation Center and take a test drive today!
We focus on delivering the best infrastructure possible in order to provide delightful experiences to our customers. The two work hand-in-hand: The right infrastructure enables key customer-facing scenarios, and the focus on the experience ensures that customers can get the most out of their infrastructure investments.
In this release, we focused on two core personas: the Service Provider who is responsible for deploying and operating the IaaS, and the tenant (or consumer) who consumes those services provided by the Service Provider.
With Windows Server 2012, System Center 2012 SP1, and Windows Azure Pack v1, we established the foundation for IaaS: A self-service portal on top of resource pools. To determine which enhancements were necessary for the R2 wave, we spent time with customers (ranging from enterprises to Service Providers, to groups within Microsoft responsible for IaaS-type services) to better understand what they needed in order to deliver an end-to-end IaaS experience. Three main pieces of feedback emerged:
These learnings helped crystallize our core customer vision for the Service Provider in the 2012 R2 release:
Enable Service Providers with a rich IaaS platform that seamlessly integrates with existing systems and processes within the datacenter and has rich self-service experiences while having the lowest COGS.
This vision defined the key scenario areas we targeted:
Success for a Service Provider business largely hinges on the ability to attract and retain tenants. It therefore falls to the Service Provider to think about how to use service offerings to attract tenants; to consider different tactics for differentiation, as well as ongoing efforts like upselling and retention to maintain healthy tenant accounts. To help Service Providers meet these challenges, we have invested in key enhancements to the service management experience targeting these specific areas:
Service Providers can build bundles of many different service offers, which are often called “Plans.” Plans include various services that can be assembled together in order to create subscriber-specific value offerings. Tenants then consume an offer by subscribing to a plan. In a very general sense, a cloud is nothing more to the consumer (in this case, the tenant) than a set of capabilities (services) at some capacity (quotas). When a service provider creates offers, they need to know what types of workloads customers want (which services to include) and how they will be consumed – as well as some basic intuition about the consumption habits of their tenants (how much will they need, and how fast will that change, etc.).
We designed an easy-to-use experience for creating offers, selecting the kinds of services or capabilities to include, and setting the quotas to control how much can be consumed by any single subscription. But, obviously, it goes beyond a simple set of compute, storage, and networking capabilities at some quota amount. One of the most important aspects of offer construction is the process of including library content to facilitate simplified application development. For that reason, the offer construction experience also features a way to include templates for foundational VM configurations and workloads.
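The plan model described above – a bundle of services, each capped by a quota, consumed through a subscription – can be sketched in a few lines. The class and service names below are hypothetical stand-ins for illustration, not the Windows Azure Pack API.

```python
# Hypothetical sketch of the plan model: a plan bundles services with
# quotas, and a tenant subscription consumes capacity against those quotas.
# Class names and service keys are illustrative, not the WAP object model.

class Plan:
    def __init__(self, name, quotas):
        self.name = name
        self.quotas = quotas                 # e.g. {"vm_cores": 8, "storage_gb": 100}

class Subscription:
    def __init__(self, plan):
        self.plan = plan
        self.used = {service: 0 for service in plan.quotas}

    def consume(self, service, amount):
        if service not in self.plan.quotas:
            raise ValueError(f"{service} is not offered in plan {self.plan.name}")
        if self.used[service] + amount > self.plan.quotas[service]:
            return False                     # quota would be exceeded; reject
        self.used[service] += amount
        return True

gold = Plan("Gold IaaS", {"vm_cores": 8, "storage_gb": 100})
sub = Subscription(gold)
print(sub.consume("vm_cores", 4))   # within quota
print(sub.consume("vm_cores", 6))   # would exceed the 8-core quota
```

This also shows why quota choices matter when constructing offers: the quota is the only thing standing between a subscription and unbounded consumption of the shared fabric.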
Armed with the ability to attract tenants through precise service offerings, the Service Provider now needs a way to focus on the quality of the tenant experience – whether for driving margin growth (in the case of public hosting), for customer satisfaction initiatives (public or private), or both. To achieve this, we introduced the concept of an add-on, which gives the Service Provider a more precise mechanism for exposing offers. Plan add-ons are usually targeted at specific plans or tenants, and they are used to drive up-sell opportunities. For example, Service Providers can create a plan add-on called “Peak Threshold Quota Expansion” that can be targeted toward subscribers who show seasonality in their consumption patterns.
Lastly, Service Providers need a way to manage the accounts and subscriptions of their tenants. The motivations for direct management of accounts and subscriptions vary widely – from white-glove service, loyalty programs, and rewards to account health and delinquency, and the need to maintain the health of the shared environment for all tenants.
The features for Service Providers are high-level, but provide comprehensive capabilities to cover a variety of scenarios, including:
One of the design goals of the R2 release is to provide a consistent experience for tenants across private, hosted, and Windows Azure public clouds. As part of the new Web Sites and Virtual Machines service offerings in Windows Azure, we launched a modern, web standards-based, device-friendly web portal for our Windows Azure customers. The Windows Azure portal has received rave reviews and has dramatically eased the manageability of cloud services. We heard from our customers that they would like similar capabilities in the Windows Azure Pack portal, which would allow them to change visual elements such as colors, fonts, images, and logos. They also wanted the portal to enable them to add new services that would help them differentiate, while staying consistent with the overall experience.
In the R2 release, the same great experience in Windows Azure is now available on Windows Server for our customers through Windows Azure Pack. This Self-Service Tenant Portal has been designed with the following capabilities.
While these capabilities offer a great in-the-box experience that is consistent with Windows Azure, all these capabilities are also available through an API for customers who want to build their own self-service portal. To facilitate your efforts to build and develop your own self-service portal, in September we will share the Windows Azure Pack Tenant self-service portal source code that can be leveraged as a sample. Upcoming blog posts will go into greater detail on this experience.
Customers would like the tenant-facing portal to reflect the brand that their business represents. Therefore, it is very essential that the portal offers the customers the ability to customize the look and feel of the portal to reflect their choice of colors, fonts, logos, and various other artifacts that represent the brand. To enable this scenario, the Windows Azure Pack Self-Service Tenant Portal has been designed from the ground up with cloud services in mind, and has been updated to allow our partners and customers to adapt it to their business needs.
The Self-Service Tenant Portal enables easy customization with your theme and brand, a custom login experience, and banners. The sample kit contains CSS files to easily override the default images, logos, colors, and the like.
As new services are introduced, the portal can light them up easily because the framework uses REST APIs and scales to a large number of services.
For example, the ability to provide custom domains is a very common need for service providers. The self-service framework allows the service provider to include these value-added services to the framework easily and in a format that makes them ready for their tenants to consume.
In the example seen in Figure 5 (see below), “Web Site Domains” is a new Resource Provider, providing custom domains. When configured, the portal lights up with this capability, allowing the tenants to subscribe to the offer.
Figure 5: Add-on services.
The ability to differentiate the tenant experience is a key strategy for many service providers, and to support such scenarios the Tenant Portal source code is provided as mentioned earlier. This enables the service provider to use the Tenant Portal as a sample and to use the Service Management APIs to integrate the experience with their own portal.
Running a data center is a complex operation in which many different systems and processes need to be aligned to achieve efficiencies at cloud scale. Automating key workflows, therefore, becomes an essential part of the data center operations. Automation capabilities have been part of our cloud solutions for a long time – System Center Orchestrator has enabled data center administrators to encapsulate complex tasks using runbooks for years, and it helps data center admins reap the benefits of automation with ease. With the 2012 release of System Center, there is now tighter integration between Service Manager and Orchestrator which enables the self-service scenarios powered by automation.
Our goals with automation have always been to enable our customers to drive value within their organization by:
Another key area of investment within 2012 R2 is Service Management Automation, which integrates into the Windows Azure Portal and enables operations exposed through the self-service portal (and via the Service Management API) to be automated using PowerShell modules.
Service Management Automation (SMA) taps into the power and popularity of Windows PowerShell. Windows PowerShell encapsulates individual automation tasks, while SMA builds workflows on top of them and provides a user interface for managing those workflows in the portal. This allows the coordination of IT activities (represented as PowerShell cmdlets), and it allows you to create IT processes (called runbooks) by assembling various PowerShell cmdlets.
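A runbook, then, is essentially an ordered set of activities that share context. In SMA those activities are PowerShell cmdlets inside a PowerShell workflow; in the conceptual sketch below, plain Python functions stand in for them so the sequencing idea is easy to see. All names are illustrative.

```python
# Conceptual model of a runbook: an ordered list of steps that run in
# sequence and share a context. In SMA the steps are PowerShell cmdlets in
# a PowerShell workflow; Python functions stand in for them here.

def create_vm(ctx):
    ctx["vm"] = "vm-001"               # illustrative VM name
    return "vm created"

def join_network(ctx):
    return f'{ctx["vm"]} joined tenant network'

def notify_tenant(ctx):
    return f'notified tenant about {ctx["vm"]}'

def run_runbook(steps):
    context, log = {}, []
    for step in steps:                 # steps execute in order, sharing context
        log.append(step(context))
    return log

log = run_runbook([create_vm, join_network, notify_tenant])
print(log)
```

The value of the runbook abstraction is that the provisioning process above becomes a single, repeatable unit that can be triggered from the portal or the Service Management API rather than executed by hand.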
In Figure 6 (see below), you can see that the Automation Service is represented in the Windows Azure Pack portal as a core resource termed “Automation.” The diagram also depicts a variety of potential integration endpoints that can participate in IT-coordinated workflows.
Figure 6: Overview of Service Management Automation.
Automating tasks that are manual, error-prone, and often repeated lowers costs and enables providers to focus on work that adds business value. By building on top of the Windows PowerShell framework, we are enabling Service Providers to leverage their existing investments in Windows PowerShell cmdlets, and we are making it easy for them to continue to reap the benefits of automation.
Service reliability can be vastly improved by ensuring that the most error-prone, manual, and complex processes are encapsulated in workflows that are easy to author, operate, and administer – and by orchestrating those workflows across multiple tools and systems.
When we talked to service providers and enterprises, it was clear that providers have complex processes and multiple systems within their IT infrastructure. Service providers have often invested a lot in user onboarding, provisioning, de-provisioning, and subscriber management processes – and they have many different systems that need to be aligned during each of these processes.
In R2, we targeted investments to enable these scenarios. For example, Windows Azure Pack’s event generation framework generates events of various types, including VM start/stop, plan subscription, and new user creation. These events can be integrated with workflows using the SMA user interface in the Windows Azure Pack portal. Now you get the benefit of automation with process integration – and with it come repeatability and predictability.
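The event-to-runbook wiring can be pictured as a simple dispatch table: each event type maps to the runbooks linked to it, and raising an event runs every linked runbook. The event names and payloads below are illustrative stand-ins, not the actual Windows Azure Pack event schema.

```python
# Hypothetical sketch of linking runbooks to Windows Azure Pack events:
# events such as VM start/stop or new-user creation trigger any runbooks
# linked to that event type. Event names and payloads are illustrative.

runbook_links = {}   # event type -> list of runbooks to trigger

def link(event_type, runbook):
    runbook_links.setdefault(event_type, []).append(runbook)

def raise_event(event_type, payload):
    # Run every runbook linked to this event type; unlinked events are no-ops.
    return [runbook(payload) for runbook in runbook_links.get(event_type, [])]

link("VMStart", lambda p: f'configure monitoring for {p["vm"]}')
link("NewUser", lambda p: f'provision mailbox for {p["user"]}')

print(raise_event("VMStart", {"vm": "vm-042"}))
print(raise_event("PlanSubscription", {"plan": "Gold"}))  # no linked runbooks
```

This is the repeatability point made above: once a runbook is linked to an event, the process runs the same way every time the event fires, with no manual step in the loop.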
In summary, SMA is about reducing costs by encapsulating complex, error-prone, manual, and repetitive tasks into runbooks that can be used in automation – and, where appropriate, by using the same patterns to integrate with other systems that need to participate in complex processes within the data center.
The cloud operating model requires providers to track tenant resource utilization and be able to bill or charge only for what was used by the tenant.
In the 2012 R2 release, we made targeted investments in this area. To begin with, there is a REST Usage API, which provides resource utilization data (at hourly fidelity) for each subscription. Providers use this API to extract utilization data and integrate the data feed with their own billing system to create the billing reports. In addition to the Usage API, we also provide Usage Reports in Excel that provide analytics and trending information. This is very useful for capacity planning based on resource consumption trends.
The Usage Metering system in R2 is designed to collect and aggregate usage data across all the resource providers and expose it via the REST Usage API, which is the only way to extract data from the system. Most Service Providers already have a billing system that they use to generate monthly bills for subscribers; using this API, they can easily integrate tenant resource utilization with that existing system. The blog post “How to integrate your Billing System with the Usage Metering System” goes into detail regarding how to leverage the API and the samples to create a billing adaptor.
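The core of any billing adaptor is a roll-up: take the hourly usage records per subscription (the fidelity at which the Usage API exposes data) and aggregate them into a monthly charge. The record shape and rate below are made-up assumptions for illustration, not the actual Usage API response format.

```python
from collections import defaultdict

# Illustrative sketch of a minimal billing adaptor: roll hourly usage
# records (one per subscription per hour, as the Usage API conceptually
# exposes them) up into a monthly charge. Record fields and the rate are
# assumptions, not the real API schema or real pricing.

def monthly_bill(hourly_records, rate_per_core_hour):
    """hourly_records: iterable of (subscription_id, core_hours) tuples."""
    totals = defaultdict(float)
    for subscription_id, core_hours in hourly_records:
        totals[subscription_id] += core_hours
    return {sub: round(hours * rate_per_core_hour, 2)
            for sub, hours in totals.items()}

records = [("tenant-a", 4), ("tenant-a", 4), ("tenant-b", 2)]
print(monthly_bill(records, rate_per_core_hour=0.05))
```

In a real adaptor the records would be fetched page by page from the REST Usage API and the output handed to the provider's existing billing system, but the aggregation step looks essentially like this.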
It is very important for Service Providers to understand how their tenants consume the offers they provide. In R2, we provide out-of-the-box data warehousing capabilities that correlate subscriptions with usage across VMs, as well as analytical reports. Because Excel is the most widely used reporting tool, we designed the Usage reports to be Excel-friendly.
In Figure 7 (see below), the usage report shows VM Usage data in hourly granularity for all subscribers. The filter allows you to scope the data to selected subscribers for the selected date ranges.
Figure 7: Usage Report.
Though Excel reports are very powerful, Service Providers also asked for a dashboard showing all the key usage metrics in order to give a “glance-able” indication of overall health. The dashboard capabilities of SharePoint are very useful when many people within an organization need to view the key performance indicators for a business. For a Service Provider, top-line revenue can be measured by how many of these services are used by tenants, and then by understanding which subscribers drive the business. For such scenarios, usage dashboards are critical and provide a convenient way to both consume metrics and perform drill-through analytics if desired.
In Figure 8 (see below), VM runtime statistics are displayed in four key dimensions:
Figure 8: Usage dashboard experience.
As you can see, the key metrics for the business are available at a glance. If further details are needed, a simple drill-through experience allows the user to select a particular chart and hone in on the details that compose the chart. This all leads to a powerful self-service analytics experience.
Service Providers need to stay compliant within the Service Provider Licensing Agreement (SPLA), and, in a quickly changing data center, keeping track of server and host inventory for licensing needs can be very difficult. This feedback was commonly shared by Service Providers, and we have made a series of key investments to make this entire process easier for them to execute.
In R2 we introduce the Server Inventory report as a feature of the Service Reporting component, which tracks all the servers and VMs. The SPLA requires Service Providers to compute the license costs for Microsoft software at the end of the month. The formula for calculating these licensing costs includes the edition of the Windows Server OS, the processor count, and the maximum number of Windows virtual machines that were hosted on the servers for that month.
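The monthly calculation just described can be sketched directly: for each host, take its processor count and the peak number of Windows VMs it hosted during the month. The data shape below is an illustrative assumption; the actual SPLA formulas and the Server Inventory report's inputs are more detailed.

```python
# Sketch of the SPLA-style monthly metrics described above: per host, the
# processor count and the peak concurrent Windows VM count for the month.
# The input shape is an illustrative assumption, not the report's schema.

def monthly_license_metrics(host_samples):
    """host_samples: {host: {"processors": n, "vm_counts": [hourly VM counts]}}"""
    return {
        host: {
            "processors": data["processors"],
            "max_vms": max(data["vm_counts"]),   # peak concurrent VMs this month
        }
        for host, data in host_samples.items()
    }

samples = {
    "host-1": {"processors": 2, "vm_counts": [3, 5, 4]},
    "host-2": {"processors": 4, "vm_counts": [1, 2, 2]},
}
print(monthly_license_metrics(samples))
```

Taking the monthly maximum (rather than, say, the average) is the operative detail: licensing tracks the peak number of VMs hosted, which is why continuous inventory tracking matters in a quickly changing data center.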
To assist in this scenario, we provide an out-of-the-box Server Inventory report that processes all the calculations and displays the information for easy consumption. Figure 9 below shows a report where the processor count and the VM Instance count are displayed for the selected months.
Figure 9: Sample Server Inventory Report.
One of the most common concerns of Service Providers is the need to look back at history and accurately compare key performance indicators (KPIs) across various time dimensions to better understand growth patterns. Similar requirements exist in licensing scenarios. To support both, we have built the licensing report on top of the data warehousing module; the reporting system keeps a cumulative count of resources and uses this information to determine compliance with licensing. As Figure 9 shows, for example, it is easy to observe the growth of processors and VM instances over the last monthly cycle.
The ability to surface data aggregated over time becomes a very powerful auditing tool as well. In R2, the default storage is for three years; this allows the provider to go back in history and understand the SPLA compliance status for the Windows Servers managed by the R2 stack.
As mentioned above, a key design goal of the 2012 R2 wave was to provide a consistent experience for tenants across private, hosted, and Windows Azure public clouds. We achieved this by delivering a consistent framework and tool set for running modern cloud services. Tenants can now run and operate a rich set of cloud services in partner-provided data centers just as easily as they can by using Windows Azure. In short, the core vision for the tenant administrator experience in the R2 release is to:
Provide a rich self-service experience that enables tenants to self-provision and scale applications in an Azure-consistent manner
This vision defined our target scenarios:
Windows Azure Pack includes a Self-Service Tenant Portal and a set of REST management APIs. The portal is deployed and operated by the Service Provider. Tenants use it to manage the services and infrastructure that are operated by the Service Provider. The Self-Service Tenant Portal is a companion portal to the Provider portal and can only be deployed and used if the operator has configured their environment with the Provider portal or used the Provider REST APIs.
Figure 10 (see below) illustrates the high-level technologies of Windows Azure Pack, and it compares the layering of these technologies in Windows Azure with Windows Azure Pack running on Windows Server 2012 R2.
Figure 10: Comparison of Windows Azure Pack technologies running in Windows Azure and Windows Server.
Because the Self-Service Tenant Portal is based on the same framework used by Windows Azure, the same rich dev-op experience originally developed for Windows Azure Websites (described in the next scenario) is available in partner data centers using Windows Server and System Center 2012 R2.
One of the new Platform-as-a-Service (PaaS) services available in Windows Azure is Windows Azure Websites. Rather than traditional IIS web hosting, this is a true elastic cloud service for provisioning and scaling website applications. It offers a rich dev-ops management experience for running and scaling the website, as well as deep integration with popular open source source-control solutions such as Git.
As part of our effort to create a consistent experience across clouds, we invested in bringing this modern website PaaS service from Windows Azure to run natively on Windows Server. The end result is a set of REST APIs for consumers, along with a management experience that is consistent with Windows Azure.
Figure 11: Self-Service Portal experience on Windows Azure and Windows Server.
As you can see in Figure 11, the Self-Service Portal experience is very similar. You’ll notice right away that the color scheme is different between Windows Azure and Windows Server. As mentioned earlier in the Service Provider Experience section, the Self-Service Portal is a customizable solution that can be themed and re-branded to suit the needs of the enterprise. In this example, we’ve applied a different theme to emphasize that the Self-Service Portal running on Windows Server is a different instance from the one running in Windows Azure.
Another difference is that the Self-Service Portal exposes only the services the Service Provider has included in the plan the tenant is using. For example, if a tenant has subscribed only to IaaS (including only Virtual Machines and Networking), only those two services would be presented in the tenant portal, as shown in Figure 12 (see below).
Figure 12: Self-Service Portal IaaS experience.
However, if the tenant subscription included all the services included in Windows Azure Pack and provided by System Center 2012 R2, the portal would look like Figure 13 (see below).
Figure 13: Self-Service Portal full experience.
Each tenant has a unique subscription, and the experience is tailored to the services provided on a per subscription basis.
Service Providers often ask about VM provisioning within the data center. The way this works is simple: Service Providers define hosting plans with resource quotas. These quotas then define where in the data center a resource is provisioned, and that location determines the amount of capacity that a customer can self-provision.
In order to enable self-provisioning of scalable tenant capacity, we introduced a new service model in System Center 2012 R2: Virtual Machine Roles. A Virtual Machine Role is a tier of VMs that operates as a single logical entity: the VMs in the tier share a set of cloud attributes and can be scaled, operated on, and treated as one unit within the portal environment.
Service Providers publish Virtual Machine Roles via the gallery to enable tenants to easily provision capacity. The Service Provider is then able to scope or limit access to these gallery items on a plan-by-plan basis. This enables the Service Provider to tailor the set of applications they make available to different groups or even individual tenants. Figure 14 (see below) shows how the tenant can select a Virtual Machine Role from the gallery. In this example, the service provider has provided six gallery items in the plan to which this tenant is subscribed.
Figure 14: Creating a Virtual Machine Role from the gallery.
Virtual Machine Roles have also been modeled and designed with Windows Azure consistency in mind. One of the new capabilities in Virtual Machine Roles is the ability to scale a virtualized application. Just as with the modern website service, tenants can now easily scale their virtualized applications.
In order to enable this scenario, a Virtual Machine Role separates the application from the image – this allows the same base image to be used for multiple applications. Next, settings unique to the Virtual Machine Role define the scaling constraints for the application along with the initial number of instances to be deployed. The default values for these settings can then be defined when the gallery item is authored. Figure 15 (see below) shows how the tenant can configure the scaling settings.
Figure 15: Specifying scaling settings for the virtual machine.
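The scaling settings just described amount to three numbers per Virtual Machine Role – an initial instance count plus minimum and maximum constraints – with every scale request clamped to that range. The sketch below models this; class and field names are illustrative, not the Virtual Machine Role package schema.

```python
# Sketch of Virtual Machine Role scaling settings: an initial instance
# count plus min/max constraints authored into the gallery item, with scale
# requests clamped to the allowed range. Names are illustrative only.

class VMRole:
    def __init__(self, initial, minimum, maximum):
        assert minimum <= initial <= maximum
        self.instances = initial
        self.minimum, self.maximum = minimum, maximum

    def scale_to(self, requested):
        # Clamp the request to the constraints defined when the gallery
        # item was authored; the tenant can never scale outside them.
        self.instances = max(self.minimum, min(self.maximum, requested))
        return self.instances

role = VMRole(initial=2, minimum=1, maximum=5)
print(role.scale_to(4))   # within range
print(role.scale_to(10))  # clamped to the maximum
```

Clamping at the service model level is what lets the tenant use the portal's scale slider freely: whatever they request, the deployment stays inside the bounds the gallery item author defined.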
In Figure 15 you’ll also notice a drop-down list for VM Size. This contains a set of Service Provider defined values for Extra-Small, Small, Medium, Large and Extra-Large. This theme of offering simplified options to the tenant consumer is in line with the same type of experience in Azure.
In addition to the scalability settings, there is a set of application-specific settings. These are uniquely defined for each gallery item. In Figure 16’s example (see below), the gallery item was authored to collect a few IIS-specific settings. The key concept to highlight here is that Virtual Machine settings are separated from the application settings. This is not merely a UX separation; it is a fundamental distinction in the Virtual Machine Role service model and package definition.
Figure 16: Specifying application settings.
After the application is deployed, the tenant will be able to manage the logical Virtual Machine Role, scale the application to handle additional load, and manage the individual instances running as part of this application. This provides a high degree of flexibility in managing the VM role independent of the application settings.
An essential feature of the Virtual Machine Role is versioning. Versioning enables the Service Provider to publish updated versions of their gallery items over time. Subscribed customers are notified when a new version is available in the portal. This allows users the option to upgrade to the new version during the appropriate servicing window. In Figure 17 (see below), the dashboard for the Virtual Machine Role indicates that there is an update available. This reminder is present in the portal because the tenant initially deployed version 1.0.0.0, and version 1.1.0.0 has been published by the provider. Tenants can choose to deploy this update during the appropriate servicing windows for the application.
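Gallery item versions are four-part version strings, so the update check behind the Figure 17 notification amounts to a simple version comparison. In PowerShell, for example (using the versions from the scenario above):

```powershell
# Four-part gallery item versions compare naturally as System.Version objects
$deployed  = [version]"1.0.0.0"
$published = [version]"1.1.0.0"
if ($published -gt $deployed) {
    Write-Output "An update is available."
}
```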
Figure 17: Update notification.
As we said earlier, a unique feature of Virtual Machine Roles is the ability to scale the application. Figure 18 (see below) shows how easily tenants can scale out new Virtual Machine instances for their applications: They simply move a slider in the portal. This is a consistent experience for scalable services running on the platform throughout the Self-Service Portal.
Figure 18: Scaling applications.
Another new scenario we have enabled as a part of Windows Azure Pack is a way to console connect to an inaccessible VM instance running on the fabric. This inaccessibility can have a variety of causes (the VM may have a misconfigured network, or remote desktop disabled, or perhaps the machine is having trouble during the installation or boot up sequence, etc.), and in each case the end result is critically important to address: The VM is inaccessible to the remote desktop connection client. This means that if the machine is running in a Service Provider’s data center, the customer has no way to access the machine to troubleshoot the problem.
Console Connect is a new feature delivered in Windows Azure Pack made possible by new capabilities in Windows Server 2012 R2 and System Center 2012 R2. Console Connect is plumbed through the entire stack including the Remote Desktop Connection client. When the tenant opens the dashboard screen for a VM, there is a “Connect” command in the command bar. By default, the Connect command will simply launch the Remote Desktop Connection client to RDP to the virtual machine. If the Service Provider has enabled Console Connect, the customer will have an additional “Console” option on the “Connect” command. When the customer selects this, it launches a new RDP session on a secure connection to the new Console Connect service provided by the operator. Figure 19 (see below) illustrates this experience.
Figure 19: Console Connect.
In Figure 20 (see below) you can see that we have established a remote connection to a virtual machine that is waiting at the Windows Server 2012 installation screen. We are actually able to remote connect to a machine that does not even have the operating system installed!
Figure 20: Console Connect to a Windows Server VM.
As discussed in last week’s blog post on how we’ve enabled open source software, a key tenet of the R2 wave is ensuring open source software runs equally well on Windows Server. This is demonstrated in Figure 21 with the ability to create a remote desktop connection to a Linux machine.
Figure 21: Console Connect to a Linux VM.
The integration of Windows Azure Pack, System Center 2012 R2, and Windows Server 2012 R2 delivers both a Self-Service Portal experience and new services that enable Service Providers to deliver a tenant administrative experience that will delight customers.
The R2 wave has built on the innovation in the 2012 releases to provide Service Providers a rich IaaS solution. We have brought to market innovation in the infrastructure itself to ensure that the network, compute, and storage layers are not only low-cost but also easy to operate through rich integration with System Center. On top of this, there is a delightful user experience for not only the IaaS administrators, but also the tenant administrators consuming IaaS.
Next week, we start a two week look at what 2012 R2 can do for Hybrid IT.
To learn more about the topics covered in this post, check out the following articles. Also, don’t forget to start your evaluation of the 2012 R2 previews today!
Part 5 of a 9-part series. Today’s post has two sections; the second half is here.
One of the industry metrics that I follow closely every quarter is the sale of x86 servers around the world. I look at the trends around the purchase of the server hardware (what is the growth rate, where are they being purchased, etc.) by country, by segment, and across at least 10 other benchmarks that matter to the Windows Server and System Center business. Sorting through this data is key, so believe me when I say that I am an Excel expert – and I love the new self-service BI that has been built by the Excel and SQL teams!
Looking at where the servers have been purchased, where the highest levels of growth are occurring, and now with organizations looking at how they move to a Service Provider model – it is obvious that we are seeing the rise of the Service Provider and public cloud.
The move to a service provider model is one of the most significant shifts we are seeing in data centers around the world. This is occurring as two key developments are afoot: first, many organizations are making moves to use Service Provider and public cloud capacity; and second, there is an internal shift within organizations towards a model wherein they provide Infrastructure-as-a-Service (IaaS) for their internal business units. This is all headed towards a model where enterprises have detailed reporting on the usage of that infrastructure, if not bill-back for the actual usage.
Back during the planning phase of 2012 R2, we carefully considered where to focus our investments for this release wave, and we chose to concentrate our efforts on enabling Service Providers to build out a highly-available, highly-scalable IaaS infrastructure on cost-effective hardware. With the innovations we have driven in storage, networking, and compute, we believe Service Providers can now build-out an IaaS platform that enables them to deliver VMs at 50% of the cost of competitors. I repeat: 50%. The bulk of the savings comes from our storage innovations and the low costs of our licenses.
You might be asking yourself, “Why is Microsoft focused on Service Providers? Isn’t that what all of us really are? Isn’t the job that most of us have in building out infrastructure (whether that be in an internal private cloud, a service provider cloud, or a public cloud) all about delivering the infrastructure our ‘customers’ need to host the applications and services that run the organization? And shouldn’t we be doing it in a way that offers the required SLA while relentlessly driving down the associated costs?” Even if you haven’t recently asked yourself a question this long, when you read today’s post (and tomorrow’s) think of yourself and your organization, and consider whether or not you think we are all Service Providers.
At the core of our investments in 2012 R2 is the belief that customers are going to be using multiple clouds, and they want those clouds to be consistent.
Consistency across clouds is key to enabling the flexible, frictionless movement of applications across these clouds; if this consistency exists, applications can be developed once and then hosted in any of these clouds. That means consistency for the developer. And if clouds are consistent, the same management and operations tools can be used to operate those applications: consistency for the IT pro.
It really all comes down to the friction-free movement of applications and VMs across clouds. Microsoft is unique in this regard; we are the only cloud vendor investing and innovating in public, private and hosted clouds – with a promise of consistency (and no lock-in!) across all of them.
We are taking what we learn from our innovations in Windows Azure and delivering them through Windows Server, System Center and the Windows Azure Pack for you to use in your data center. This enables us to do rapid innovation in the public cloud, battle harden the innovations, and then deliver them to you to deploy. This is one of the ways in which we have been able to quicken our cadence and deliver the kind of value you see in these R2 releases. You’ll be able to see a number of areas where we are driving consistency across clouds in today’s post.
And speaking of today’s post – this IaaS topic will be published in two parts, with the second half appearing tomorrow morning.
In this first half of our two-part overview of the 2012 R2’s IaaS capabilities, Erin Chapple, a Partner Group Program Manager in the Windows Server & System Center team, examines the amazing infrastructure innovations delivered by Windows Server 2012 R2, System Center 2012 R2, and the new features in the Windows Azure Pack.
As always in this series, check out the “Next Steps” at the bottom of this post for links to a wide range of engineering content with deep, technical overviews of the concepts examined in this post. Also, if you haven’t started your own evaluation of the 2012 R2 previews, visit the TechNet Evaluation Center and take a test drive today!
It’s hard to believe that only one year ago we were celebrating the release of Windows Server 2012, and System Center 2012 SP1 was in beta.
Both of these releases delivered the most innovation to date in a single server operating system release, and the question on everyone’s mind was, “What’s next?” There was another question, too: “What could we deliver in a year?”
The good news is that, as engineers, we love a challenge, and taking software projects like Windows Server and System Center and delivering a compelling set of end-to-end scenarios in just a year was a challenge we welcomed!
One of the things we noticed when we examined Windows Server 2012 and System Center 2012 SP1 was that, despite the great innovation, customers still had to stitch multiple components together in order to build an Infrastructure-as-a-Service (IaaS) offering.
IaaS was a critical area where we could make a difference for our customers, and it was an area of increasing importance to our enterprise customers and their need to internally operate in more of a Service Provider capacity (this includes consolidating data center resources and offering virtual machine (VM) capacity to departments, delivering not only an infrastructure spend benefit but also reduced operating costs). Additionally, the number of Service Providers moving from more traditional dedicated hosting models into the IaaS market is growing. This focus on delivering IaaS solutions for Service Providers (and enterprises that want to act as Service Providers for their users) became a rallying point across the Windows Server, System Center, and Windows Azure Pack (WAP) teams. As David Cross explained in his intro to this R2 series, we shifted our focus from features and components to delivering smooth end-to-end scenarios.
To deliver this value in the Windows Server 2012 R2 release, we focused on two important aspects of delivering IaaS:
To be clear, by “tenant” we are referring to the customer of the Service Provider who is acquiring and using resources offered by the Service Provider.
The remainder of this blog post will analyze the specific scenarios we outlined in our planning process, and we’ll also detail how we worked through the integration and technical challenges to deliver these scenarios for the Windows Server 2012 R2 preview software that is currently available. As mentioned above, these investments are targeted specifically at Service Providers and enterprises that want to act as Service Providers for their users. For simplicity, we will refer to this collection of customers simply as Service Providers.
In the cloud-first world, infrastructure plays an increasingly important role in the modern data center. Innovation at the infrastructure layer enables Service Providers to deliver higher levels of performance and availability as well as richer services – while remaining cost effective. Additionally, the focus on separating the infrastructure from the workloads and applications makes it easier to adopt new innovation and stay up to date. With a little upfront planning and design, the infrastructure can move forward at a faster pace than the workloads that run on it.
When we think about the infrastructure, we focus on three components:
With Windows Server 2012 and System Center Virtual Machine Manager (SCVMM), we enabled new foundational scenarios from Continuous Availability (with in-box NIC Teaming) to building blocks for Software Defined Networking (with the extensible Hyper-V Virtual Switch and Hyper-V Network Virtualization [HNV]) to IP Address Management (IPAM). We designed these scenarios from end-to-end, to provide the infrastructure needed to enable them in Windows Server, and the management and automation of these scenarios through SCVMM.
If you’d like to see this for yourself, I recommend taking a couple minutes and downloading the System Center 2012 R2 and Windows Server 2012 R2 previews.
As we looked to build on this foundation, we spent time with customers (from enterprises, to Service Providers, to Microsoft’s own data center groups such as Azure and Bing) hoping to understand how they operate and what they needed to run their networks. As a result of this research, three main pieces of feedback emerged:
These learnings helped crystallize our core customer vision for networking in the R2 release:
Enable existing networks to become a pooled, automated resource with flexibility to move workloads across any cloud and to enable such agility alongside high performance and easy diagnosability.
This vision defined the scenario areas we targeted: Cloud Scale Performance and Diagnosability, Comprehensive Software-Defined-Networking (SDN) Solution, and Network Infrastructure Enhancements for the cloud. As shown in Figure 1 (see below), there are many features that further enhance our end-to-end capabilities in these areas.
Figure 1: Networking enhancements in the R2 wave
Driving up performance enables customers to do more with their existing investments, instead of having to rely on new or specialized hardware to meet their scale needs. This in turn drives down capital expenses. With the Windows Server 2012 R2 release, Virtual RSS (vRSS) enables traffic-intensive workloads to scale up and optimally utilize high-speed NICs (10 GbE) even from a VM. In addition, in order to improve diagnostics and thereby drive down operational expenses, we worked closely with Azure to design the ability to remotely collect packet captures and ETW traces both as a live feed (with Microsoft Message Analyzer) and as an .etl file. This enables diagnostics without a need to log on to the console of the target computer (host or VM). The resulting model helps get at the needed data faster to isolate root causes, and it enables more flexibility for remotely administered computers.
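Both capabilities are exposed through Windows PowerShell. As a rough sketch (adapter, host, session, and path names are all hypothetical), vRSS is enabled on the guest's network adapter, and a remote packet capture can be collected without logging on to the target:

```powershell
# Inside the VM: enable virtual RSS on the guest network adapter (name is hypothetical)
Enable-NetAdapterRss -Name "Ethernet"

# From a management machine: capture packets on a remote host to an .etl file
# (NetEventPacketCapture module, new in Windows Server 2012 R2)
$remote = New-CimSession -ComputerName "HV01"
New-NetEventSession -Name "DiagCapture" -CaptureMode SaveToFile `
    -LocalFilePath "C:\Traces\diag.etl" -CimSession $remote
Add-NetEventPacketCaptureProvider -SessionName "DiagCapture" -CimSession $remote
Start-NetEventSession -Name "DiagCapture" -CimSession $remote
# ...reproduce the issue, then:
Stop-NetEventSession -Name "DiagCapture" -CimSession $remote
```

The resulting .etl file can then be opened in Microsoft Message Analyzer for analysis.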
Today’s Virtual LAN (VLAN) networks tend to be inflexible and they require high touch whenever network changes are required. In multi-tenanted service provider environments, this setup reduces agility across a number of scenarios, such as onboarding new tenants, live migrating VMs, and applying new policies. With the 2012 R2 release, we advanced both of the key building blocks – Hyper-V Virtual Switch and Hyper-V Network Virtualization. We have also filled a key in-box gap around providing Windows Server gateways to enable customers to flexibly span their workloads across multiple clouds. We also enabled basic management of physical switches (with standards-based schemas) using Windows PowerShell and SCVMM, thereby making it possible to automate certain diagnostics across the hosts and into the physical network. Finally, deployment and management is entirely automated via SCVMM, with tenant self-service via Windows Azure Pack.
Enhanced Network Virtualization
In Windows Server 2012, we shipped the first release of our network virtualization solution. It was highly scalable, standards-based and enabled scenarios such as:
The components that comprise our network virtualization solution include four important elements:
Though we have great partners (including F5, Huawei, and Iron Networks) shipping HNV gateways, we heard feedback from customers that we needed to ship an in-box gateway that reduced the amount of computing resources needed to perform these gateway capabilities, met the high availability needs for virtual networks, and was fully manageable by SCVMM. Addressing this customer feedback was one of our major focuses for R2.
In Windows Server, we added a multi-tenant in-box gateway that performs Site-to-Site (VPN), Network Address Translation (NAT), and Forwarding functions. The multiple-tenant portion of the gateway enables Service Providers to reduce the amount of compute resources dedicated to providing the HNV gateway capabilities needed for network virtualization. To do this, we enabled multiple VM networks to use the same gateway VM.
Though this might sound like a pretty straightforward feature, it required some big changes across the networking stack, the virtual switch, and network virtualization. First, to support a multi-tenant gateway, we added the ability to support multiple routing domains within the same VM. This required changes in the Windows networking stack to compartmentalize the different virtual networks, providing tenant traffic isolation per compartment and allowing overlapping IP addresses.
To ensure availability for virtual networks, the key was to make the gateway highly available (because it provides the bridge out of the virtual network). We leveraged the existing host and guest clustering capabilities in Windows, which allowed us to build a highly available in-box gateway. In addition to supporting clustering in the gateway, we needed to expand the types of packets supported in the VM network, because this is required for the clustering heartbeat functionality. In Windows Server 2012 we supported only IP traffic, and we needed to expand this to include ARP and Duplicate Address Detection (DAD) packets. Making these changes also had the fortunate side effect of enabling new scenarios in the network virtualization platform, e.g. bring-your-own-DHCP, and host and guest clustering that allows for highly available VMs in a VM network.
Just adding an in-box gateway in Windows Server wasn’t enough. It was important for this to really light up when managed by SCVMM, which brings the automation and management to virtual networks and the gateways. Building on the existing capabilities of being able to create, modify, and remove VM networks, SCVMM makes it seamless to deploy the in-box gateway by shipping a service template. SCVMM also automatically configures the gateway and ensures that even as VMs move around in the data center, the communication in the VM network remains unbroken. This is the type of integration between compute and networking and Windows and SCVMM that provides the flexibility and automation needed by our customers.
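On the gateway VM itself, the multi-tenant mode is surfaced through the RemoteAccess role. A rough sketch of the manual steps that SCVMM's service template automates (the tenant routing-domain name is hypothetical, and exact parameters may vary):

```powershell
# Install and enable the multi-tenant gateway (RemoteAccess role, Windows Server 2012 R2)
Install-WindowsFeature -Name RemoteAccess -IncludeManagementTools
Install-RemoteAccess -MultiTenancy

# Enable site-to-site VPN routing for one tenant's routing domain (name is hypothetical)
Enable-RemoteAccessRoutingDomain -Name "Contoso" -Type VpnS2S -PassThru
```

In a production deployment you would let SCVMM drive this via the shipped service template rather than configuring gateway VMs by hand.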
Integrated physical and virtual switch management
Another common IT need which we now support is integrated physical and virtual switch management. We heard from customers that the management experience between virtual and physical is disjointed and causes operational issues such as when VLAN configuration is out of sync between the physical switch and the virtual switch. To solve this issue, and others like it, we delivered two things in R2.
First, we introduced new standards-based switch management. This allows admins to manage their switches (physical and virtual) through PowerShell using an industry-standard management schema. The standards-based schema allows admins to set and configure ports on the switch, set and configure VLANs, and much more. In addition, with Windows Server 2012 R2, we are extending the Windows Logo program to include network switches that implement this industry standard. Though using Windows PowerShell to manage switches and having a logo certification provide strong customer value on their own, we again wanted to light up scenarios across Windows and SCVMM to provide the greatest value. Second, we have added the ability in SCVMM to manage network switches. To go back to the example above where VLANs get out of sync between physical and virtual switches, we use the power of this standardized switch management interface plus SCVMM to monitor the VLAN configuration across both the virtual and physical switches, notify the admin if something in the VLAN configuration is out of sync, and allow the administrator to automatically fix the misconfiguration.
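As an illustrative sketch (the switch address and VLAN values are hypothetical), the standards-based cmdlets talk to a certified switch over a CIM session:

```powershell
# Connect to a certified physical switch over its standards-based (CIM) management interface
$switch = New-CimSession -ComputerName "tor-switch-01"

# Inspect the switch's Ethernet ports, then create a VLAN
Get-NetworkSwitchEthernetPort -CimSession $switch
New-NetworkSwitchVlan -CimSession $switch -VlanId 120
```

Because virtual switches are managed from the same PowerShell session, the physical and virtual sides of a VLAN change can be scripted together.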
The networks of virtualized data centers and cloud environments need to be automated for agility, dynamically scalable, and under tight administrative control. With the R2 release, IP Address Management (IPAM) implements several major enhancements, including streamlined and unified IP address space management of physical and virtual networks, as well as tighter integration with SCVMM. IPAM in Windows Server 2012 R2 offers granular and customizable role-based access control and delegated administration across multiple data centers. IPAM also provides a single console for monitoring and managing addressing and naming services across data centers – especially supporting administration of advanced capabilities around continuous availability of IP addressing (with DHCP failover), DHCP policies, filters, and so on. With the need for integration with other systems and automation, IPAM offers exhaustive Windows PowerShell support and is highly scalable with SQL Server support as its backend. There have also been improvements to the addressing and naming services themselves, such as DNS supporting per-zone query metrics for managed services, and DHCP policies now supporting FQDN-based policy to streamline DNS registration.
For more details about the specific networking feature improvements in R2, see the Transforming Your Data Center – Networking blog post.
In summary, the 2012 R2 wave builds on the networking foundation introduced in Windows Server 2012 and System Center 2012 SP1 with improvements in performance and diagnosability, as well as by providing rich SDN scenarios.
Windows Server 2012 was a significant release for Microsoft in terms of delivering a best-in-class cloud computing platform. We’ve continued to invest in developing the industry’s best compute virtualization platform for cloud computing with Windows Server and System Center 2012 R2 – and this includes investments across private, public and hosted cloud environments.
After the release of Windows Server 2012, we received feedback from customers applauding the strides made in enabling higher levels of virtual machine mobility and new data center and deployment architectures, and noting how Hyper-V Replica could be used to bridge the gap between a private cloud and a hosted cloud environment.
As we talked with customers about how they were using Windows Server 2012 and the opportunities they saw ahead of them, it became very clear that the R2 wave of products needed to:
Remove the barriers between private, public and hosted cloud environments and deliver a single compute platform that would work for all environments.
This vision defined the scenario areas we targeted: Increasing Uptime and Performance in Hosted Cloud Environments, Improving Operations in Private Cloud Environments, and Delivering Next Generation Experiences. As shown in Figure 2 (see below), there are many features that further enhance our end-to-end capabilities in these scenario areas.
Figure 2: Compute enhancements in the R2 wave
For hosted cloud environments, we have spent a lot of time working on increasing virtual machine uptime and providing advanced storage capabilities for virtual machines. With Windows Server 2012 R2, it is now possible to configure quality of service controls on virtual machine storage while the virtual machine is running. This means that virtual machines with high storage throughput requirements (for example, a heavy database workload) will not use excessive amounts of storage throughput and slow down other virtual machines in the environment. We also added the ability to create clustered virtual machines using highly available virtual hard disks that are stored on a Scale-Out File Server. This allows a Service Provider to offer enterprise-grade, clustered, virtualized workloads, without the need to invest in separate storage hardware, and without exposing their storage infrastructure to the end user. Finally, it is now possible for Service Providers to both expand and reduce the size of virtual hard disks on running virtual machines.
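These per-disk controls are scriptable while the VM stays online. A sketch, with VM and path names hypothetical (the QoS limits are expressed in normalized 8 KB IOPS):

```powershell
# Cap and reserve storage throughput for a running VM's SCSI-attached disk
Set-VMHardDiskDrive -VMName "Tenant-SQL01" -ControllerType SCSI `
    -ControllerNumber 0 -ControllerLocation 0 `
    -MinimumIOPS 100 -MaximumIOPS 1000

# Grow a VHDX attached to the running VM's SCSI controller (online resize)
Resize-VHD -Path "C:\ClusterStorage\Volume1\Tenant-SQL01\data.vhdx" -SizeBytes 200GB
```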
All these investments mean that Service Providers can deliver new capabilities and higher levels of service to their customers without the need to invest in new hardware.
The amazing level of virtual machine mobility delivered in Windows Server 2012 was one of the most popular additions in that release. Customers have particularly enjoyed being able to deploy updates to clustered environments with zero downtime and minimal administrative oversight, thanks to the Cluster-Aware Updating functionality introduced in that release.
We continued to make significant investments in live migration in Windows Server 2012 R2. For example, live migration with compression provides 2 to 3 times faster live migration with your existing infrastructure, while live migration over SMB Direct provides even faster live migration (with lower CPU utilization) for RDMA-enabled network infrastructures. The end result of this is that maintenance operations and patch deployment in your private cloud environment can be performed in significantly less time. This means that an update that may have taken half a day to deploy to your private cloud environment can now be deployed over a lunch break.
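Which transport live migration uses is a per-host setting. For example (a sketch; you would pick one option per host depending on the network hardware available):

```powershell
# Use compression for live migration over existing Ethernet networks
Set-VMHost -VirtualMachineMigrationPerformanceOption Compression

# Or, on RDMA-capable networks, migrate over SMB Direct instead
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB
```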
To see this feature in action, watch Jeff Woolsey’s demo here.
We have also added the ability to make exported copies of running virtual machines for easy troubleshooting and diagnosis of issues inside a virtualized environment.
We are also continuing to push the envelope on what is possible with virtual machines by introducing second generation virtual machines – a new type of virtual machine that is UEFI-based and dramatically reduces the use of emulated “legacy” devices.
For administrators who interact directly with virtual machines on a daily basis, we have worked closely with the Remote Desktop team to deliver an entirely new level of integration for virtual machines that provides full Remote Desktop capabilities (such as enhanced graphics, copy and paste, and sound) when interacting with a virtual machine – without requiring network connectivity to the virtual machine.
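Each of these features is also exposed through the Hyper-V PowerShell module. A rough sketch, with VM names and paths hypothetical:

```powershell
# Export a copy of a VM while it keeps running (live export, new in 2012 R2)
Export-VM -Name "Web01" -Path "D:\Exports"

# Create a Generation 2 (UEFI-based) virtual machine
New-VM -Name "Gen2-Web" -Generation 2 -MemoryStartupBytes 2GB `
    -NewVHDPath "D:\VMs\Gen2-Web\os.vhdx" -NewVHDSizeBytes 60GB

# Allow enhanced sessions (copy/paste, sound, rich graphics) on the host
Set-VMHost -EnableEnhancedSessionMode $true
```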
In summary, the R2 wave substantively expands the level of reliability and high performance you can expect from Windows Server, and increases the confidence you can have building and operating your cloud infrastructure when using Windows Server and System Center.
A large number of enhancements around storage came with the 2012 wave of products, including Cluster Shared Volumes (CSV), the new virtual hard disk VHDX format, the Resilient File System (ReFS), Storage Management Initiative Specification (SMI-S) support, iSCSI Software Target, Server Message Block (SMB) 3.0, Hyper-V Virtual Fibre Channel, and Chkdsk improvements to name a few. Most notably in the 2012 release was a set of strategic shifts introduced to reduce storage costs. By using software-defined storage with Spaces as a low-cost storage alternative, and file-based storage for application workloads such as Hyper-V, customers could reduce their storage costs significantly.
In planning for R2, one of the clear messages we heard from customers who were responsible for building private cloud and IaaS solutions was that storage represents one of the largest areas of spend. This presents a serious and ongoing challenge for tight enterprise IT budgets, and, for the 2012 R2 release, we attacked this problem head-on. We approached storage with a goal to reduce both capital expenditure costs and operational expenditure costs through software-defined storage.
This learning helped crystallize our core customer vision for storage in the R2 release:
Reduce $/GB and $/IOPS/GB for private cloud and IaaS storage while delivering high performance and continuous availability
This vision defined the scenario areas we targeted:
As shown in Figure 3 (see below), there are many features that further enhance our end-to-end capabilities in these scenario areas.
Figure 3: Windows Server 2012 R2 storage enhancements
Building on the strategic shifts introduced in Windows Server 2012, our vision for IaaS cloud storage focuses on disaggregated compute and storage. In this model, scale-out of the Hyper-V compute hosts is achieved with a low-cost storage fabric using SMB3 file-based storage, where virtual machines access VHD/VHDX files over low-cost, Ethernet-based networks, as illustrated in Figure 4:
Figure 4: Disaggregated compute and storage
This model enables the Hyper-V compute layer to scale out without incurring the cost of expensive Fibre Channel host bus adapters (HBAs) in each server. SMB3 serves as the high-performance protocol, enabling VHDs to be accessed from a Scale-Out File Server. IaaS deployments can achieve high-performance scale-out on low-cost Ethernet or InfiniBand fabrics, as well as in a converged networking model. A few of the enhancements in Windows Server 2012 R2 are:
The combination of these features delivers a high performance and scalable file-based storage infrastructure at low cost.
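In practice, the disaggregated model means the Hyper-V hosts simply point at a UNC path. A sketch (share, computer, and account names are hypothetical):

```powershell
# On the Scale-Out File Server: publish a continuously available SMB3 share for VM storage
New-SmbShare -Name "VMs" -Path "C:\ClusterStorage\Volume1\VMs" `
    -FullAccess "CONTOSO\HyperV-Hosts" -ContinuouslyAvailable $true

# On a Hyper-V compute host: place a new VM's disk on the share over Ethernet
New-VM -Name "Web01" -MemoryStartupBytes 2GB `
    -NewVHDPath "\\SOFS\VMs\Web01\os.vhdx" -NewVHDSizeBytes 60GB
```

Because the share is continuously available, the VM's handle to its VHDX survives a failover of the file server node.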
In Windows Server 2012 R2, we continue the journey with Spaces delivering high-performance, resilient storage on inexpensive hardware through the power of software. Windows Server 2012 R2 delivers many of the high-end features you expect from expensive storage arrays, such as:
The combination of these features will deliver the opportunity for significantly reduced CapEx costs for a scalable, resilient, cloud storage platform.
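For example, a tiered, mirrored space with a write-back cache can be carved out of a pool of commodity SSDs and HDDs. A sketch, assuming a storage pool named "Pool1" already exists (pool, tier, and disk names are hypothetical):

```powershell
# Define SSD and HDD tiers in an existing storage pool
$ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD

# Create a mirrored, tiered virtual disk with a 1 GB write-back cache
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "VMStore" `
    -StorageTiers $ssd, $hdd -StorageTierSizes 100GB, 900GB `
    -ResiliencySettingName Mirror -WriteCacheSize 1GB
```

Hot data is automatically promoted to the SSD tier over time, which is where much of the $/IOPS saving comes from.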
Another focus area was to reduce the operational costs associated with deploying and managing a Windows Server 2012 R2 cloud storage infrastructure. SCVMM now provides end-to-end management of the entire IaaS storage infrastructure. Some of the most notable enhancements are:
This simplified and consolidated end-to-end management experience is part of our commitment to delivering a complete experience that combines the power of Windows Server 2012 R2 with the manageability of System Center 2012 R2 to reduce operational expenses (OpEx).
When you combine all the pieces of software-defined storage as discussed above, Windows Server 2012 R2 will further reduce $/GB and $/IOPS/GB for private cloud and IaaS storage while delivering traditional high-end value such as continuous availability.
This in-depth look at the IaaS capabilities in the 2012 R2 wave of products is pretty jaw dropping, and, to go even deeper into these amazing innovations, I recommend taking a look at the content featured in the “Next Steps” section below.
The infrastructure innovations in R2 set a new benchmark for the scalable, flexible, powerful cloud computing era.
We’ll get into this even more in the second half of our IaaS overview tomorrow.
To learn more about the topics covered in this post, check out the following articles covering Networking and Storage. Also, don’t forget to start your evaluation of the 2012 R2 previews today!
Part 4 of a 9-part series.
The Common Engineering Criteria (CEC) that I’ve talked about in previous posts have been instrumental in ensuring that the teams contributing to 2012 R2 met rigorous and specific criteria in areas like Manageability, Virtualization Readiness, Data Center & Enterprise Readiness, Reliability, Hardware Support, and Interoperability. The original idea behind the creation of the CEC back in the early 2000’s was to drive consistency across all of the Microsoft Server workloads and applications – simplifying the experience of using multiple Microsoft solutions together, as well as driving down the total cost of owning and operating our solutions.
The benefit of the CEC is that it ensures all our enterprise solutions come ready to be deployed, operated, and managed “out-of-the-box” in the Microsoft Clouds. To get really specific, each and every workload team across Microsoft (e.g., all the roles in Windows Server, the Office Workloads, etc.) delivers the knowledge that instructs System Center how to operate that workload in the Microsoft Clouds. If you ever ask the question, “Which Cloud is best suited for running Microsoft workloads?” the answer should be clear – the Microsoft Clouds!
One of the best things about this 2012 R2 wave of products was how its unified planning and engineering milestones allowed teams from across Microsoft to work on different aspects of the release in parallel. This process brought together a genuinely amazing collection of solutions and value while leveraging the expertise of each team and each individual. This is a really important point to think about. If I were to ask who has the most knowledge and expertise in how Windows Server should be deployed and operated, the answer is simple: The Windows Server team at Microsoft. Therefore, whenever you deploy Windows Server in the Microsoft Clouds you have an enormous built-in advantage: The expertise of the entire Windows Server team. This far-reaching expertise is expressed through the Common Engineering Criteria in the form of any future updates that become available – every one of those updates will be built, tested, and approved by the Windows Server team, aka the world’s foremost Windows Server deployment experts.
There are a lot of great surprises in these new R2 releases – things that are going to make a big impact in a majority of IT departments around the world. Over the next four weeks, the 2012 R2 series will cover the 2nd pillar of this release: Transform the Datacenter. In these four posts (starting today) we’ll cover many of the investments we have made that better enable IT pros to transform their datacenter via a move to a cloud-computing model.
This discussion will outline the ambitious scale of the functionality and capability within the 2012 R2 products. As with any conversation about the cloud, however, there are key elements to consider as you read. In particular, I believe it’s important in all these discussions – whether online or in person – to remember that cloud computing is a computing model, not a location. All too often when someone hears the term “cloud computing” they automatically think of a public cloud environment. Another important point to consider is that cloud computing is much more than just virtualization – it is something that involves change: Change in the tools you use (automation and management), change in processes, and a change in how your entire organization uses and consumes its IT infrastructure.
Microsoft’s perspective here is unique, and it is leading the industry with its investments to deliver consistency across private, hosted, and public clouds. Over the course of these next four posts, we will cover our innovations in the infrastructure (storage, network, compute), in both on-premises and hybrid scenarios, support for open source, the cloud service provider & tenant experience, and much, much more.
As I noted above, it simply makes logical sense that running the Microsoft workloads in the Microsoft Clouds will deliver the best overall solution. But what about Linux? And how well does Microsoft virtualize and manage non-Windows platforms, in particular Linux? Today we’ll address these exact questions.
Our vision regarding other operating platforms is simple: Microsoft is committed to being your cloud partner. This means end-to-end support that is versatile, flexible, and interoperable for any industry, in any environment, with any guest OS. This vision ensures we remain realistic – we know that users are going to build applications on open source operating systems, so we have built a powerful set of tools for hosting and managing them.
A great deal of the responsibility to deliver the capabilities that enable the Microsoft Clouds (private, hosted, Azure) to effectively host Linux and the associated open source applications falls on the shoulders of the Windows Server and System Center team. In today’s post Erin Chapple, a Partner Group Program Manager in the Windows Server & System Center team, will detail how building the R2 wave with an open source environment in mind has led to a suite of products that are more adaptable and more powerful than ever.
As always in this series, check out the “Next Steps” at the bottom of this post for links to a variety of engineering content with hyper-technical overviews of the concepts examined in this post.
During the planning process for a release we look at the assumptions we’re making, challenge them, and then look ahead to anticipate how our industry will be shaped by changing market conditions. While open source software has been present in the datacenter for several years, as we look at what a modern datacenter entails it is increasingly clear that enabling open source software is a key tenet in our cloud offerings.
Not only do enterprises run key workloads based on Linux and UNIX, but, in this cloud-first world, many applications leverage open source components. To provide our customers with one cloud infrastructure, one set of system management tools, and one set of paradigms to transform their datacenter with the cloud, we knew that we needed to ensure that Windows is the best platform to run Linux workloads as well as open source components. With Windows Server 2012 R2, System Center 2012 R2, and in the public-cloud with Windows Azure, IT pros now have this assurance.
Whether it’s on-premises management of your datacenter, running in the Microsoft public cloud, or a hybrid of the two, you can now run and manage Windows and Microsoft applications, as well as run and manage Linux, UNIX, and open source applications – all with a consistent experience.
Consider this scenario: You are an infrastructure administrator for a large hosting company, or for an IT department within an enterprise organization. Your customers very likely want to host and manage complex applications that require services running on multiple Windows and Linux guest virtual machines. Today, you may be using separate hosting and management tools for the Windows and Linux environments. This means separate hypervisors, separate management tools, and separate user interfaces for your customers. You may even have separate technical staff for the two environments! This bifurcation dramatically increases complexity and costs – both for you and for your customers.
Windows Server 2012 R2 and System Center 2012 R2 offer consolidation on a single infrastructure to run and manage Windows and Linux guest virtual machines. With a single infrastructure, operations and processes are simplified considerably. For example, you no longer have to deal with the complexity of handling Windows one way and Linux another, and complex applications that have Windows and Linux components are no longer a special case that must span two infrastructures. Now you can spend more time providing great service to your customers and less time dealing with operating system differences. Your customers also get the advantage of a single, unified view of their applications and workloads, with consistent and unified reporting, resource usage, and billing.
At the core of enabling this single infrastructure is the ability to run Linux on Hyper-V. With the release of Windows Server 2012 Hyper-V, and enhanced by the updates in the 2012 R2 version, Hyper-V is at the top of its game in running Linux guests. We’re delivering this with engineering investments in Hyper-V, of course, but also in the Linux operating system.
You read that correctly – some of the work we are doing at Microsoft involves working directly with the Linux community and contributing the technology that really enables Hyper-V and Windows to be the best cloud for Linux.
Here’s how we’ve done it: Microsoft developers have built the drivers for Linux that we call the Linux Integration Services, or “LIS.” Synthetic drivers for network and disk provide performance that nearly equals the performance of bare hardware. Other drivers provide housekeeping for time synchronization, shutdown, and heartbeat. Directly in Hyper-V, we have built features to enable live backups for Linux guests, and we have exhaustively tested to ensure that Hyper-V features, like live migration (including the significant performance improvements in 2012 R2), work for Linux guests just like they do for Windows guests. In short, we worked across the board to ensure Linux is at its best on Hyper-V.
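As a quick way to see the integration services in action, the Hyper-V PowerShell module on the host can report which services a guest exposes. This is a minimal sketch, not a definitive procedure: the VM name ‘ubuntu01’ is a hypothetical placeholder, and the set of services reported depends on the LIS version running in the guest.

```powershell
# From the Hyper-V host: list the integration services a guest reports.
# This works for Linux guests running the LIS drivers; 'ubuntu01' is an
# illustrative VM name - substitute one of your own.
Get-VMIntegrationService -VMName 'ubuntu01' |
    Format-Table Name, Enabled, PrimaryStatusDescription -AutoSize
```

A healthy Linux guest with current LIS will show services such as Heartbeat and Time Synchronization with an OK status, just as a Windows guest would.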
To ensure compliance, Microsoft has done this LIS development as a member of the Linux community. The drivers are reviewed by the community and checked into the main Linux kernel source code base. Linux distribution vendors then pull the drivers from the main Linux kernel and incorporate them into specific distributions, so LIS now ships as a built-in part of many popular distributions.
Updates to LIS for the 2012 R2 release tackle several key issues, bringing Linux guests to the same baseline as Windows guests when running on Hyper-V.
These enhancements and others are described in more technical detail on the Hyper-V Virtualization blog.
Going forward, Microsoft will continue the cycle of enhancing the Linux Integration Services to match new Hyper-V capabilities, contributing the enhancements to the Linux kernel through the community process, and then working with distribution vendors to incorporate the latest LIS into new Linux distribution versions. As a result, IT pros can be confident in Microsoft’s commitment to offering a unified infrastructure and to helping you and your customers to reduce cost and complexity. Also keep in mind that all the work we do in Windows and Hyper-V is applicable and consistent across the Microsoft clouds: Private, hosted and Windows Azure. Because Windows Server and Hyper-V are the foundation of Windows Azure, all our investments directly apply.
Consider another scenario: As an infrastructure administrator for a large hosting company, or for an IT department within an enterprise organization, you have to manage those Windows and Linux guest VMs and associated applications that are running on Hyper-V. Most likely, you also have physical computers running Windows, Linux, or UNIX that haven’t been virtualized. As with the core execution layer provided by Hyper-V, you’d like to have consistent management of these different operating systems, and consistent management of the different “hardware” – whether it be virtual or physical. You don’t want different consoles, different tools, and different processes/procedures for the different operating systems and hardware. Most importantly, you don’t want your customers to see these differences. Management represents the second major investment area of OSS enablement.
To support a single infrastructure that runs and manages Windows and Linux, we bet on standards-based management using CIM (Common Information Model) and WS-Man (Web Services for Management). At the heart of this bet is the work we are driving with the industry on the Data Center Abstraction Layer (DAL) to provide a common management abstraction for all the resources of a data center to make it simple and easy to adopt and deploy cloud computing. The DAL is not specific to one operating system; it benefits Linux cloud computing efforts every bit as much as Windows. The DAL uses the existing DMTF standards-based management stack to manage all the resources of a data center. To support the DAL, Microsoft has contributed Open Management Infrastructure (OMI) as an open source implementation of these standards along with a set of providers for managing Linux. We built OMI from the ground up to support Linux natively and provide the rich functionality, performance, and scale needed in a Linux CIMOM.
For our customers, we wanted to make managing Linux and any CIM-based system simple to automate via PowerShell. We introduced the PowerShell CIM cmdlets in Windows Server 2012, which enable IT pros to manage CIM-based systems natively from Windows.
To learn more about these cmdlets, you can type the following in PowerShell:
Get-Command -Module CimCmdlets
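Building on that, here is a hedged sketch of using the CIM cmdlets to query a Linux server over WS-Man. The host name ‘linux01’, the credentials, and the root/scx namespace and SCX_OperatingSystem class (supplied by the System Center cross-platform providers that run on top of OMI) are assumptions for illustration; adjust them for your environment.

```powershell
# Hypothetical Linux host running OMI with the SCX providers installed.
# Port 1270 is the WS-Man/SSL port conventionally used by the
# cross-platform agent; your configuration may differ.
$opts    = New-CimSessionOption -UseSsl
$session = New-CimSession -ComputerName 'linux01' -Port 1270 `
               -Authentication Basic -Credential (Get-Credential) `
               -SessionOption $opts

# Query operating system details from the Linux box, natively from Windows.
Get-CimInstance -CimSession $session -Namespace 'root/scx' `
                -ClassName 'SCX_OperatingSystem' |
    Select-Object Name, Version, TotalVisibleMemorySize
```

The same Get-CimInstance call shape works against Windows machines as well – which is exactly the point of betting on CIM and WS-Man as the common management substrate.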
System Center builds upon and enhances the core management investments in the platform to deliver consistent management across Windows, Linux, and UNIX. We started on this journey several years ago, and System Center Operations Manager was the first major area of investment, offering Linux/UNIX monitoring more than 4 years ago. Since then, Microsoft has broadened our Linux/UNIX coverage to include Configuration Manager, Virtual Machine Manager, and now, in the System Center 2012 R2 release, Data Protection Manager.
In Operations Manager, nearly all of the functionality available for Windows servers is also available for Linux and UNIX servers: Monitor OS health and performance, monitor log files, monitor line-of-business applications, monitor databases and web servers, and audit security-relevant events. Going up the software stack, Microsoft supplies management packs for Java application servers, both open source (Tomcat, JBoss) and proprietary (IBM WebSphere and Oracle WebLogic). Partners also supply management packs for other open source software such as MySQL and the Apache HTTP Server. This functionality appears in a single console, with Windows, Linux, and UNIX computers side-by-side, so that you get one view of your workloads and applications.
Similarly, core Configuration Manager functionality is available for Linux and UNIX, including hardware inventory, inventory of installed applications, the ability to distribute and install software packages, and reporting on all of these areas. ConfigMgr can install open source and proprietary software packages to Linux and UNIX in almost any format. ConfigMgr also includes anti-virus agents for all of the Linux distributions managed by Microsoft. Again, Windows, Linux, and UNIX computers appear side-by-side, with one set of concepts and paradigms. This means you spend less time flipping between environments and more time solving real problems.
Virtual Machine Manager is the fabric controller, and it is at the heart of a private cloud. It manages Windows guests and Linux guests running on Hyper-V, and it can personalize Linux OS instances during deployment, so that multiple Linux guests can be deployed from a single template (with each guest automatically getting a unique identity, IP address, etc. – just like for sysprep’ed Windows images). For those complex applications with Windows and Linux components, Linux can participate in VMM service templates to deploy a multi-tier service. The service template can be all Linux, or it can be a mixture of Linux and Windows tiers. Almost all of the rest of the great capabilities of VMM are agnostic to the guest OS, so live migration and placement, IP address management, network virtualization, and storage management easily work for Linux. With this level of consistency, you will rarely need to worry about whether a virtual machine is running Windows or Linux.
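As a hedged sketch of what this looks like in practice, the VMM PowerShell module deploys a Linux guest from a template exactly as it would a Windows guest. The template, cloud, and VM names below are illustrative assumptions, not values from any real environment.

```powershell
# Requires the VMM console/PowerShell module on the management machine.
Import-Module virtualmachinemanager

# Hypothetical names - substitute your own template and cloud.
$template = Get-SCVMTemplate -Name 'CentOS-Web-Template'
$cloud    = Get-SCCloud -Name 'Production'

# VMM personalizes the Linux OS instance during deployment (unique
# host name, IP address, etc.), just as it does for sysprep'ed Windows.
New-SCVirtualMachine -Name 'web01' -VMTemplate $template -Cloud $cloud
```

Because the cmdlets are the same regardless of guest OS, the scripts your team writes for Windows deployments carry over to Linux with only the template swapped.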
In System Center 2012 R2, Data Protection Manager adds the ability to back up Linux guest VMs running on Hyper-V, again giving you consistency across Windows and Linux. The Linux guest VMs can continue running live – there is no need to pause or suspend them – and DPM will get a file-system-consistent snapshot of the VM to back up. “File system consistent” means that the Linux file system buffers are automatically flushed via integration with the Linux Integration Services for Hyper-V. This kind of consistency is analogous to the application consistency provided via VSS writers for Windows VMs.
System Center 2012 R2 gives you a single, consistent systems management infrastructure for private clouds with Windows and Linux, or for your datacenter’s physical or virtualized infrastructure running Windows, Linux, and UNIX. Applications with Windows and Linux components can be deployed and managed from a single interface, giving you reduced complexity and reduced costs.
In any IT environment, open source is more than just the operating system. You may be using open source components in your applications, whether you are a vendor offering Software-as-a-Service (SaaS) from the cloud, or an enterprise running open source components in your datacenter.
To provide customers with increased flexibility for running open source-based applications on Windows, Microsoft simplified the process for building, deploying and updating services that are built on Windows. This was achieved through the development of a set of tools called “CoApp” (Common Open source Application Publishing Platform), which is a package management system for Windows that is akin to the Advanced Packaging Tool (APT) on Linux.
Using CoApp, developers on Windows can easily manage the dependencies between components that make up an open source application. Developers will notice that many of the core dependencies, such as zlib and OpenSSL, are already built to run on Windows and are available immediately in the NuGet repository. Through NuGet, CoApp-built native packages can be included in Visual Studio projects in exactly the same manner as managed-code packages, making it very easy for a developer to download core libraries and create open source applications on Windows. Those of you with a developer orientation can get more details on CoApp in these videos: GoingNative - Inside NuGet for C++ and Building Native Libraries for NuGet with CoApp’s PowerShell Tools.
We’ve also done great work collaborating with the open source community to ensure specific OSS apps run on, and are optimized for, Windows. For example, consider PHP, which is a foundational component of many content management and publishing applications. Microsoft works in the PHP community to ensure that versions are available which run natively on Windows, right alongside the versions that run on Linux or UNIX. The newest version, PHP 5.5.0, was released for Windows on the same day that it was released for other operating systems, and the Windows version includes significant performance improvements.
In addition to all of these improvements, the Azure gallery now includes a broad range of Open Source applications thereby providing customers with ready access to install and run commonly used Open Source software on Azure.
Microsoft’s ongoing commitment to supporting Open Source Software has been highlighted recently in two important partnerships: First, our customers can now run Oracle software on Windows Server Hyper-V and in Windows Azure encompassing Java, Oracle Database, and Oracle WebLogic Server. This can all happen on Windows Server Hyper-V or Windows Azure in a fully supported mode. Second, a new Java development kit (JDK) will be available through a partnership with Azul Systems. This will enable customers to deploy Java applications on Windows Azure using open source Java – on Windows and Linux.
Enabling open source software is a key part of our promise to support the efforts of our customers as they continue to transform their datacenters with the cloud. This enablement is a key tenet of the scenarios we design and build our products to handle. The features and functions that enable open source software are an integral part of our products, and each element of these products is built and tested by our core engineering teams. These efforts are fully supported by Microsoft.
As you might expect for the “Enable OSS” tenet of this 2012 R2 release, key parts of our open source enablement are themselves open source. For example, the Linux Integration Services are open source in the Linux kernel, and Microsoft releases the source code for most of the agents that System Center uses on Linux and UNIX to provide management capabilities. OMI and CoApp are also open source projects, and, of course, PHP on Windows is part of the PHP open source project.
With this release Microsoft is clearly the choice for datacenter infrastructure if you require the ability to run and manage open source software alongside Windows.
This post has covered one key trend in the datacenter (the need for one infrastructure and management solution for Windows and Linux), and next week we’ll examine another critical element: The need for enterprises to act more like service providers.
A core requirement for this trend is for enterprises to deploy and operate an Infrastructure-as-a-Service. In next week’s post, we’ll examine the infrastructure and experience improvements we’ve made across Windows and System Center to enable this scenario for customers.