• Recap: World Hosting Days @ Europa Park, Germany

    Today’s keynote at World Hosting Days (held at about midnight Redmond time at Europa Park in southern Germany) capped a great two days of meetings with the Hosters and tech leaders who are leading the industry with their cloud-based solutions.

    Regular readers of this blog might be wondering why a VP of engineering at Microsoft is at a hosting conference.  The reason is really simple and really important:  Right now, the largest growth in purchases of new x86 servers is coming from Hosters. These hosting partners are seeing tremendous demand for their services and capacity as organizations around the world move to a Hybrid Cloud model.

    One of the topics my keynote touched on in detail was the principle of the “Virtuous Cycle” of the cloud – something I’ve written about before here.

    This virtuous cycle is something that only Microsoft is delivering in a complete, holistic way.  Our approach to this cycle is something that Hosters understand better than anyone – they know the challenges of deploying, maintaining, and growing a shared cloud platform and services, as well as the constant need to innovate and drive efficiencies.  This cycle, and the Hybrid Cloud approach that enables it, is incredibly exciting, and it’s something that Microsoft alone enables – AWS can’t support it because it lacks a viable on-prem solution, and VMware falls short because it doesn’t have a public cloud option.  This means that Microsoft is uniquely positioned as the premier partner for Hosters all over the world. Microsoft is currently the only organization taking everything we learn from our own hyper-scale experiences and providing it for the rest of the industry to use.

    This virtuous cycle, based on our Cloud OS approach and built on Azure, delivers three critical things:

    • Hyper Scale, which delivers lower costs and greater agility.
    • Enterprise Grade capabilities, which translate to greater confidence.
    • Hybrid integration and consistency, which means you avoid lock-in and get more flexibility.

    Microsoft is committed to providing the technology and tools that enable Hosters to expand their service offerings, grow their business, and better serve their growing customer base.

    Looking at the market, there are a number of growth drivers that we are watching, and at Microsoft we have begun to structure key sections of our engineering, sales, and marketing teams to develop products and initiatives which proactively address the requirements and opportunities that support the Hoster market.

    In particular, there are three specific areas where we currently see the strongest revenue growth:

    1. Infrastructure
      Our latest products are resonating well with Hosters, and we see an increasing number of partners endorsing Microsoft technologies to develop the cloud fabric (virtualization of resources, automation of their provisioning). 
    2. Hosted DB or DBaaS
      Based on SQL, this brings the largest contribution to our revenue growth – and it is still accelerating.  To put the value in perspective, DBaaS solutions serve two scenarios:
      • They extend the lifecycle of existing LOB applications that are moving from on-prem to hosted IaaS, at a lower cost of ownership. Hosters are proving their ability to capture those applications and support the migration to the datacenter.
      • They support the development of a new class of modern apps designed for the cloud.  This is seen in the evolving business model of ISVs who are moving from software to SaaS.
    3. Hosted desktop
      With an increasing number/variety of devices which IT teams need to support, compounded by the increasing mobility requirement of professionals, end-users are increasingly interested in Hosted Desktop solutions.  These solutions are expected to maintain security, identity, and compliance across company applications that are centrally hosted and accessible via a Remote Desktop Service. Microsoft Hosters are currently managing more than 2.3 million hosted desktop users – a business that is growing rapidly.

    One final note from today’s event:  I want to reemphasize the commitments I made on-stage to the World Hosting Days audience:

    First, Microsoft’s commitment to innovation.  Hosters are going to see this innovation in two key ways:  Cloud-first design and optimizing for Hosters.

    Second, Microsoft is committed to openness with the Hoster community.  This openness can already be spotted in a lot of important areas:

    • On January 28th we announced that Microsoft joined the Open Compute Project (OCP) and we’re contributing the Microsoft Cloud Server specification to the community.
    • We continue to contribute additional Linux distribution support.  Over the last year, no other organization has contributed more content to the Linux community than Microsoft.
    • We will continue adding to the Windows Azure Pack with gallery items, languages, and frameworks.
    • We will continue our active participation with Open Daylight and the Open Network Summit.  To see this in action, visit the Microsoft booth at the summit – you’ll be surprised.
    • You can read more about this in an earlier post entitled, Enabling Open Source Software.

    Thanks again to the organizers of World Hosting Days for the invite!  This event and this section of the IT industry are driving innovation and using technology in ways that everyone can learn from.

  • Announcing the Preview of Disaster Recovery to Azure using Azure Site Recovery

    If you’re an enterprise that has viewed previous cloud-based DR solutions with skepticism – brace yourselves for the details of this announcement. This is awesome.

    When we sat down to plan how to build a cloud-based DR solution focused on mission-critical workloads, we had one primary priority: Make DR available to everyone, available everywhere, and easy to use. Arguably, that’s three primary priorities – but the results are undeniable.

    We began with the current version of Hyper-V Recovery Manager that’s been available since January 2014 (as noted in this post). This version of HRM enabled automated protection, asynchronous ongoing replication, and orchestrated/accurate/consistent recovery of virtualized workloads between private clouds across enterprise sites with minimal downtime. Starting today, HRM has a new name: Microsoft Azure Site Recovery.

    But this is a lot more than just a name change announcement. After an intense and carefully focused development, I am really excited to announce the preview of a new Disaster Recovery to Azure functionality that’s now available as part of Azure Site Recovery (ASR).

    ASR now also enables you to protect, replicate, and fail over Virtual Machines directly to Microsoft Azure, increasing the resilience of your business-critical apps. The efficiency and availability that come from this resiliency have a direct impact on the bottom line – but that isn’t the only cost savings ASR provides. Using ASR also removes the need to invest in an on-prem standby datacenter.

    This new capability preview in ASR is a big step towards our promise of “No Workload Left Behind.” It’s a lot like the similarly named government program, minus the acerbic partisan animosity (but still a lot of standardized testing).

    Here’s what ASR looks like in action:


    Both the existing DR solution for on-prem private clouds and the new DR to Azure capabilities are built atop a pretty amazing foundation: Windows Server Hyper-V Replica, System Center Virtual Machine Manager, and Microsoft Azure – and both are delivered via the Microsoft Azure Management Portal.

    In addition to enabling Microsoft Azure as a DR site in multiple geographies, this preview also includes an impressive list of features for enabling virtual machine replication to Azure:

    • At-Scale Configuration
      You can configure the protection and replication of VM settings in a private cloud and configure and connect on-prem networks with Azure Networks. Those VMs are then replicated only to customer-owned and managed geo-redundant Azure Storage.
    • Variable Recovery Point Objective (RPO)
      This feature provides support for near-synchronous data replication with RPOs as low as 30 seconds. You can also retain consistent snapshots at desired frequency for a 24-hour window.
    • Data Encryption
      VM Virtual Hard Disks can be encrypted at rest using a secure, customer-managed encryption key that ensures best-in-class security and privacy for your application data when it is replicating to Azure. This encryption key is known only to the customer, and it is needed for the failover of VMs to Azure. Simply put: All of this service’s traffic within Azure is encrypted.
    • Self-Service Disaster Recovery
      With ASR you get full support for DR drills via test failover, planned failover with zero data loss, unplanned failover, and failback.
    • One-Click Orchestration
      ASR also provides easy-to-create, customizable Recovery Plans to ensure one-click failovers and failbacks that are always accurate, consistent, and help you achieve your Recovery Time Objective (RTO) goals.
    • Audit and Compliance Reporting with Reliable Recovery
      DR testing and drills can be performed without any impact to production workloads. This means you get risk-free, high-confidence testing that meets your compliance objectives. You can run these non-disruptive test failovers whenever you like, as often as you like. Also, with the ability to generate reports for every activity performed using the service, you can meet all your audit requirements.
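    One way to picture how a Recovery Plan sequences a failover is as ordered groups of VMs. Here is a rough mental model in Python – this is illustrative only, not ASR’s actual API:

```python
def run_recovery_plan(groups, failover_vm):
    """Fail over each group of VMs in order, so dependencies (databases
    before application tiers, application tiers before web front ends)
    come up in the right sequence. Returns the startup order."""
    started = []
    for group in groups:
        for vm in group:
            failover_vm(vm)   # in ASR this would be the orchestrated failover step
            started.append(vm)
    return started

# A hypothetical three-tier plan:
plan = [
    ["sql-01", "sql-02"],   # group 1: databases first
    ["app-01"],             # group 2: application tier
    ["web-01", "web-02"],   # group 3: web front ends last
]
order = run_recovery_plan(plan, failover_vm=lambda vm: None)
# order == ["sql-01", "sql-02", "app-01", "web-01", "web-02"]
```

    The point of one-click orchestration is that this ordering is captured once in the plan, so every drill and every real failover replays it identically.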

    ASR can also help you quickly and easily migrate on-premises Virtual Machines to Azure or spin up additional development and testing environments. Whether you are protecting a few dozen virtual machines or hundreds, ASR offers some huge benefits.

    In particular:

    • ASR is really simple – it’s easy to configure and automate your DR across private clouds or directly to Azure.
    • It’s cost effective – protecting workloads in Azure means big savings in the CAPEX and OPEX spent building secondary datacenters.
    • It’s intuitive for your team to use – the self-service model builds on top of existing products, e.g. System Center, Windows Server, Azure.
    • It’s extensible – the cloud-based service architecture allows for faster development and easy access to new features.
    • It offers a consistent user experience – no matter if you’re working in a private cloud, with a service provider, or in a public cloud, the UX and the functionality are the same.

    For more information on ASR, check out the recording of the ASR session at TechEd 2014 where we discussed the preview. You can also visit the Azure Site Recovery forum on MSDN for additional information and to engage with other customers.

    Once you’re ready to see what ASR can do for you, you can check out pricing information, sign up for a free trial, or learn more about the product specs.

  • A Real Cloud OS for the Enterprise Cloud Era

    Over the last few years I’ve had, in one meeting or another, countless discussions with my teams and colleagues about the size, shape, and impact of the Cloud.  If you’ve been to a Microsoft event like TechEd or MMS, you’ve heard us talk about the Cloud OS and the promises we feel the Cloud OS delivers. 

    What can get lost in so much of the discussion, however, is a simple question:  What is the Cloud OS?

    This calls to mind an important point made by Server & Tools President, Satya Nadella, at last year’s TechEd Conference.  He noted:

    “At the most basic level, any operating system has two “jobs”: it needs to manage the underlying hardware, and it needs to provide a platform for applications. The fundamental role of an operating system has not changed, but the scale at which servers are deployed and the type of applications now available or in development are changing massively.”

    This is a critical concept we focus on as we build the tools for private/public/hybrid clouds, and lead the charge to create a new cloud operating system for a new cloud-based era of business.

    The Cloud OS has three key pieces that address the operating system “jobs” that Satya noted:  Windows Server 2012 (a cloud-optimized server OS that can support workloads of any size), System Center 2012 (for ops, management, and governance of the datacenter), and Windows Azure (for application development, deployment, and administration).

    We deeply understand that customers do not want to feel locked in to any specific cloud, and this is why our promise of consistency across Clouds is such a critical factor for everyone to understand.  Windows Server is the foundation of what we deliver for building Private Clouds and Clouds hosted by Service Providers around the world, and it is also the foundation of Windows Azure. 

    When you deploy the Microsoft Cloud OS, you have the flexibility to choose which cloud is best for your business and for the specific application.  There’s no lock-in; you make the choice about what’s best for you.  This promise makes us truly unique in the industry. 

    Another thing that sets us apart is the fact that we are the only organization in the world that is taking all the knowledge gained from operating our own massive public cloud, and then packaging those learnings for use in enterprise and service provider datacenters.  To further support our community, in January we released, as a free download, Windows Azure Services for Windows Server.  This download is software that we initially developed for use in Windows Azure for scenarios like high-density web hosting.  This is a great example of how we are able to prove and exhaustively test capabilities like this in Azure at a massive scale and then release them for use on Windows Server.  Our on-premises products continually get better from the innovative work we do in Azure.  I think that is pretty cool!

    This is a simply amazing advancement in how we think about and execute IT Management.

    Microsoft’s ability to provide the Cloud OS is the result of having genuinely battle-tested this software throughout our 16 major data centers around the globe – where we manage hundreds of petabytes and hundreds of distinct workloads.  To really grasp what a major leap forward this is, it helps to stop thinking about IT deployments in terms of the individual servers, and instead think about an OS that runs a datacenter.  This operating system is what provides the intelligence, automation, and orchestration necessary to manage workloads that are increasingly large, complex, and diverse.

    As you work to modernize your own data center, keep in mind the “four promises” of the Cloud OS:

    1. You can transform your datacenter by thinking above and beyond servers and nodes, and instead focus on managing datacenters and clouds in a comprehensive, scalable, and elastic environment.
    2. This scalability and elasticity enables modern apps to be rapidly developed, functional across devices, and manageable in multiple environments.  This means a dynamic, new approach to application lifecycle management, app availability, and IT productivity.
    3. Unlock insights on any data thanks to lower storage costs, ever-increasing volumes of available data, cloud-based processing power, and the simple and easy BI solutions currently available from Microsoft.
    4. Enable your workforce to be productive anywhere on any device by empowering people-centric IT – and manage these devices and their apps from a single pane of glass.

    These are big promises – and they are promises I believe Microsoft is uniquely able to deliver.  In future posts, I’ll continue to talk a lot more about these promises and what we are doing to deliver on them for you.

  • Success with Hybrid Cloud: Saving Time & Money with a Hybrid Cloud


    Over the course of this Success with Hybrid Cloud series we’ve covered the structure behind a hybrid environment and the best practices to plan, build, deploy, and operate one. For any organization, the Hybrid Cloud effectively combines an enterprise’s on-premises infrastructure with that of cloud service provider infrastructure and the public cloud to create the extended compute, storage, and network infrastructure for the enterprise.

    A Hybrid Cloud allows an enterprise to complement on-premises capacity with cloud infrastructure services on an as-needed basis. Because of the elasticity provided by cloud services, the hybrid cloud model offers a high degree of flexibility to enterprises that need to add capacity yet maintain certain resources on-premises for compliance, licensing, or other purposes. The popularity of a Hybrid environment as the go-to IaaS strategy for enterprises is already high, and I expect that popularity to continue growing.

    It is really important to understand the capabilities you are going to use as you stretch your datacenter infrastructure to use external cloud capacity. Simply put: All clouds are not created equal. With this in mind, I think it is valuable to talk about the work we have done in Windows Server and System Center to enable Hybrid Clouds, and then compare this to what others have done in the market. I’ll also put some financial numbers alongside these technical details to demonstrate why we think using Microsoft for your Hybrid Cloud solution provides something really powerful at a really economical price.

    In this post, I’ll examine the benefits of a Hybrid environment for Networking, Storage, and Compute.

    Hybrid Networking

    Hybrid networking refers to the capabilities that extend an enterprise’s on-premises network seamlessly to the cloud. Hybrid networking enables enterprises to easily move their VMs (and workloads) from the on-premises network to the cloud and back while maintaining IP addresses and other networking policies. With hybrid networking, an enterprise administrator can treat their composite network – spanning enterprise-cloud boundaries – as one extended network for placing compute and storage resources.

    Hybrid networking in Windows Server and System Center 2012 R2 was described in detail in an earlier post, and a key capability is the multitenant site-to-site (S2S) VPN gateway that can support S2S connections from multiple customers, thus eliminating the need to deploy separate gateways for each customer. The gateway also supports VPN and Internet access (see graphic below).

    [Graphic: multitenant site-to-site VPN gateway architecture]

    In the 2012 R2 release, a single pair of multitenant S2S gateways (in 1+1 failover configuration) can support up to 200 S2S connections with aggregate throughput of up to 1.5 Gbps, and each such connection could potentially belong to a distinct customer. This is a significant architectural detail to understand: Whether you are a service provider, or an enterprise that needs to offer secure and isolated networking to its tenants, the architecture of your multi-tenant gateway will have a huge impact on your costs.

    The following table shows the cost savings for the service provider when the multitenant gateway supports just 15 tenant connections (with average throughput of up to 100 Mbps each).

    • The second column here represents traditional S2S connectivity deployed by service providers, whereby a single pair of S2S gateway VMs per tenant is deployed.
    • The third column depicts the computations using a pair of WS 2012 R2 multitenant gateways.
    • The last column describes the case wherein the customer deploys a virtualized function in Amazon Web Services (AWS) – implemented using the Vyatta virtual router/firewall – to realize S2S connectivity for constructing its virtual private cloud.
     

    | | Traditional S2S GW solution (2 VMs per tenant) | 2012 R2 multitenant S2S GW solution (2 VMs) | AWS Vyatta solution |
    | --- | --- | --- | --- |
    | Cost of S2S GW VM per hour | 6 cents | 6 cents | 36 cents |
    | Cost per VM per year (hourly rate × 24 × 365) | $525 | $525 | $3,150 |
    | Number of VMs required to support one connection each from 15 customers | 30 | 2 | 15 |
    | Cost of VMs for 15 customers per year | $15,750 | $1,050 | $47,250 |

    The hourly VM cost figures in the second and third columns are representational, based on Azure service rates. Other service provider tariffs may be substituted in these columns, but the cost advantage of the 2012 R2 multitenant GW is clear. This cost advantage holds as long as the S2S throughput requirement of an individual customer is significantly less than the aggregate throughput capability of the S2S gateway (allowing effective multiplexing), which is true for the typical business customer. The cost savings to the service provider will translate to cheaper connectivity for customers in building their hybrid cloud.
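    The table’s arithmetic is easy to check. Here is a quick sketch using the rates straight from the table (the table rounds the $525.60 per-VM annual cost down to $525, so its totals are slightly lower than the exact figures):

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours of always-on gateway time

def annual_gateway_cost(vm_hourly_rate, vm_count):
    """Yearly cost of running `vm_count` always-on gateway VMs."""
    return vm_hourly_rate * HOURS_PER_YEAR * vm_count

tenants = 15
traditional = annual_gateway_cost(0.06, 2 * tenants)  # a 1+1 gateway pair per tenant
multitenant = annual_gateway_cost(0.06, 2)            # one shared 1+1 pair for all tenants
aws_vyatta  = annual_gateway_cost(0.36, tenants)      # one Vyatta instance per tenant

# Multiplexing check: 15 tenants at 100 Mbps each is 1.5 Gbps, which is
# exactly the aggregate throughput ceiling of one multitenant gateway pair.
fits = tenants * 100 <= 1500
```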

    Another area for cost savings for the service provider is the IP Address Management (IPAM) capability that ships with Windows Server 2012 R2.

    With the R2 release, IPAM implements several major enhancements:

    • Unified IP address space management of physical and virtual networks through tight integration with SCVMM.
    • Granular and customizable role-based access control and delegated administration across multiple datacenters.
    • Single console monitoring and management of DHCP and DNS services across datacenters – in particular, the administration of DHCP failover, DHCP policies, and filters.
    • Complete PowerShell support for integration with automation workflows, and support for SQL Server as the IPAM data store.

    The use of IPAM in a hoster datacenter is depicted in the previous graphic.

    IPAM automates tasks that otherwise require expensive and inherently unreliable manual effort, including management of physical and tenant address spaces, sequential update and management of DHCP and DNS services, and implementation of provisioning and monitoring workflows. In a virtualized cloud environment, IPAM is key to ensuring the agility promised by virtualization – new VM instances can be quickly created and deployed without IP address assignment, DHCP and DNS updates becoming bottlenecks. IPAM thus saves time, provides visibility into the network state, and saves cost for the operator.
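    To make that concrete, here is a toy sketch of one task IPAM automates – finding the next free address in a managed range (an illustration of the concept, not IPAM’s implementation):

```python
import ipaddress

def next_free_address(network_cidr, allocated):
    """Return the first host address in the network that is not already
    allocated, or None if the range is exhausted."""
    for host in ipaddress.ip_network(network_cidr).hosts():
        if str(host) not in allocated:
            return str(host)
    return None

in_use = {"10.0.0.1", "10.0.0.2"}
next_ip = next_free_address("10.0.0.0/29", in_use)
# next_ip == "10.0.0.3"
```

    Doing this by hand across thousands of VMs – while keeping DHCP and DNS in sync – is exactly the kind of error-prone manual work IPAM removes.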

    To answer your next question: Yes, ROI analyses and estimates of the cost savings from IPAM do exist, including projections that IPAM eliminates hundreds of person-hours of operations time per year and saves thousands of dollars in operations costs.

    Finally, the 2012 R2 hybrid networking solution is managed through System Center Virtual Machine Manager, and service providers can set up a Windows Azure Pack (WAP)-based self-service portal for customers to create and monitor S2S connections.

    These networking features save time for business customers by making new capacity fast to create. Rather than taking days or weeks to set up, a customer can self-provision the infrastructure in minutes in the service provider cloud and connect it back to its on-premises facilities seamlessly.

    Hybrid Cloud Storage

    Data is one of the most important assets a business has, but the exponential growth of this data has made it increasingly difficult to manage. Organizations all over the world have faced this simple fact: Storage, the technology for holding and protecting data, must evolve in order to keep up with data growth and the access requirements mandated by legal and regulatory compliance. It’s clear that greater efficiencies and tighter automation are going to be needed moving forward.

    Hybrid Cloud Storage is a breakthrough technology that integrates on-premises storage systems with cloud storage services. Our recently released Windows Azure Backup Service provides a way for our customers to automate their nightly backup processes using Windows Azure Storage as the location for storing that backup data. This means that data no longer has to occupy on-premises storage and it frees storage administrators from the time-consuming and error-prone tasks of running and managing backups. If there had been Hybrid Cloud Storage decades ago, customers wouldn’t have had to manage tapes and offsite storage all these years!

    But Hybrid Cloud Storage can be much more than backup automation – it can also provide uninterrupted, continuous capacity expansion for on-premises systems and applications without consuming additional on-premises storage or data center resources. I recently wrote about the anniversary of our StorSimple acquisition and I identified several of the major successes we’ve had helping customers deal with the high cost of storage.

    With storage, there is a universal use case that almost every company struggles with: Storing inactive data with a much lower total cost of ownership where it can be easily retrieved. Companies have many reasons to keep historical copies of data for long periods of time, but they don’t want to use expensive on-premises SAN capacity and the administrative overhead required to do it. Hybrid Cloud Storage with StorSimple automatically and transparently offloads inactive data to Windows Azure Storage where it is safely and securely stored – and can be retrieved quickly.
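    The idea behind this automatic tiering can be sketched simply (an illustration of the general concept, not StorSimple’s actual algorithm): data that hasn’t been touched within some cutoff becomes a candidate for the cloud tier.

```python
from datetime import datetime, timedelta

def cloud_tier_candidates(last_access, now, inactive_days=90):
    """Given a mapping of data-set id -> last-access time, return the ids
    that have been inactive longer than the cutoff and can be offloaded
    to cloud storage."""
    cutoff = now - timedelta(days=inactive_days)
    return sorted(ds for ds, ts in last_access.items() if ts < cutoff)

now = datetime(2014, 6, 1)
history = {
    "finance-archive-2011": datetime(2012, 1, 15),  # long inactive -> cloud tier
    "current-projects":     datetime(2014, 5, 30),  # hot -> stays on the SAN
}
candidates = cloud_tier_candidates(history, now)
# candidates == ["finance-archive-2011"]
```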

    Consider a couple real-world examples:

    AGC AusGroup is a manufacturing and construction services company in Australia and Southeast Asia that recently invested about $500,000 USD in two data center SANs. The problem they didn’t anticipate was that both SANs quickly filled up with inactive data and they were left needing expensive capacity upgrades. In addition to these unforeseen costs, their archiving software proved to be expensive and time-consuming to manage. Fortunately for them, this common predicament fell right into StorSimple’s sweet spot: Helping organizations avoid the cost of acquiring additional, expensive on-premises storage capacity. In AGC’s case, they were able to immediately postpone a $125,000 expenditure and they were able to save additional money by displacing their archiving software and reducing the capacity needed for their SharePoint implementation.

    A similar scenario took place at MulvannyG2 Architecture, with a slightly different twist. Having stored paper documents with an offsite records company for decades, they had hundreds of millions of historical documents that they wanted digitized and indexed. What they didn’t want to do was fill up their new high-performance SAN storage with documents that had minimal performance requirements. Again, our StorSimple solution made the most sense, both for its ease of integration and its low cost of ownership. MulvannyG2 avoided spending $140,000 USD on yet another SAN and is on their way to eliminating the $50,000 annual cost of managing their documents offsite. They are also looking forward to improving their DR abilities using their StorSimple system and Windows Azure Storage.

    Considering how much conversation there is in the industry around security, I want to emphasize the world-class structure we have in place to protect your data:  The data is encrypted before it leaves your datacenters, it is then encrypted again in transit, and it is encrypted again at rest in Azure. At every stage, you hold the keys – the keys never come to Microsoft – so you can take advantage of these incredible storage/backup/DR scenarios knowing your data is safe and secure.

    These two examples show how customers get immediate budget relief by implementing the StorSimple and Windows Azure Hybrid Cloud Storage solution. But the financial benefits of Hybrid Cloud Storage extend far beyond their immediate impact; it is a solution that continues to generate user benefits throughout its lifecycle by offloading storage capacity to the cloud and automating the time-consuming tasks of backup, archiving, and DR preparation.

    Hybrid Compute

    The private cloud and public cloud each have their sweet spots and constraints in terms of optimizing for Compute.  By its very nature, a Hybrid Cloud approach extends your degrees of freedom, allowing you to achieve cost optimizations that are simply not achievable with an all-private or all-public strategy. 

    For example, most public cloud providers charge for network egress out of their network.  For applications with high egress requirements (to Zone 2 regions, for example), implementing these in your private cloud can provide excellent cost savings. For applications with high storage requirements, the cost of public cloud storage on Azure can be under 4 cents per GB per month – that is pretty tough to beat for redundant storage.

    To achieve these cost savings it’s necessary to do an effective analysis of your workloads and understand their behaviors across the following measures:

    • Compute Size (number of processors, cores, and RAM)
    • Compute Utilization (peak loads and annual utilization)
    • Storage requirements
    • Network egress by zone
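    A rough placement model ties these four measures together. The sketch below uses the under-4-cents/GB storage figure mentioned above; the VM and egress rates are illustrative placeholders, not actual pricing:

```python
def monthly_public_cost(vm_hours, storage_gb, egress_gb,
                        vm_rate=0.10, storage_rate=0.04, egress_rate=0.12):
    """Rough monthly public-cloud cost for one workload. The vm_rate and
    egress_rate defaults are assumed placeholder figures; storage_rate
    reflects the ~4 cents/GB/month cited earlier."""
    return vm_hours * vm_rate + storage_gb * storage_rate + egress_gb * egress_rate

# A storage-heavy archive vs. a high-egress application:
archive = monthly_public_cost(vm_hours=100, storage_gb=10_000, egress_gb=50)
chatty  = monthly_public_cost(vm_hours=100, storage_gb=200,    egress_gb=20_000)
# Cheap redundant storage favors putting the archive in the public cloud;
# the egress bill argues for keeping the chatty app in the private cloud.
```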

    With this analysis done, you can begin to understand the nature of your Compute needs – and with this understood, you can begin to address them accordingly with technologies that lower your costs.

    Here are some examples of where innovative companies have improved efficiencies in this regard:

    • United Airlines & Sika AG:  They leveraged their private cloud infrastructure across multiple datacenters to lower the cost of their DR solutions.
    • Trek Bicycle Corp provides a good example of the benefits of combining public cloud (Azure) IaaS to complement their existing datacenter solutions and reduce costs.

    With a clear picture of these behaviors in hand, you can develop a hybrid strategy to address your Compute needs accordingly. 

    The first step is to model the usage of the applications and services you provide back to the business.  For example, applications that are just used at month-end or quarter-end are good candidates for moving to the public cloud to capitalize on the bursting nature of these apps. 
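    The month-end case is easy to quantify. With assumed illustrative rates (not actual pricing), consider a reporting app that only needs to run about 72 hours a month:

```python
def always_on_private(amortized_monthly_cost):
    """Dedicated on-prem capacity costs the same whether or not it is used."""
    return amortized_monthly_cost

def burst_to_public(hours_used, hourly_vm_rate):
    """In the public cloud, a bursty app pays only for the hours it runs."""
    return hours_used * hourly_vm_rate

private = always_on_private(300.00)  # assumed amortized monthly server cost
public  = burst_to_public(72, 0.50)  # 72 hours at an assumed $0.50/hour
# public == 36.0 – roughly a tenth of the always-on cost for this pattern
```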

    From a tooling perspective, System Center’s App Controller provides IT with a single pane of glass to view VMs in your private cloud and the Azure public cloud. 

    We have a growing number of hosting service providers in our Cloud OS Network (COSN) who are implementing the Windows Azure Pack (WAP) to provide an Azure consistent experience for you in their hosted clouds.  This approach allows you to consider compute cost savings based on your region’s service providers – as well as Azure and your own internal costs for compute. 

    Finally, there are value-added hybrid services that provide cost savings opportunities for innovative IT departments.  These include leveraging Azure backup, Hyper-V Replica, and Hyper-V Recovery Manager as vehicles for offloading expensive storage and disaster recovery solutions. 

    Summary

    Whether you are a cloud service provider or an enterprise considering your Hybrid Cloud options, Microsoft offers a powerful solution that can save time and money – and lots of it.

    These savings are realized across countless networking, storage, and compute scenarios, and the technologies described here are only going to become more efficient as we continue to refine and update these solutions.

    To dig deeper into this topic, I recommend a few of the links I used as footnotes back in my “What’s new in 2012 R2” post about Hybrid Networking:

    • Networking for Cloud Services in Windows Server 2012 R2
      In this video from TechEd Europe, session presenters cover the advances in core network infrastructure services (DNS, DHCP, and IPAM) in Windows Server 2012 R2, as well as how to implement these enhancements in private, public, and hybrid cloud environments.
    • Deep Dive on Hyper-V network virtualization in Windows Server 2012 R2
      In this video from TechEd North America, session presenters discuss how Hyper-V Network Virtualization is a key investment to enable workload mobility and SDN in the cloud.  This session includes a deep dive into how Hyper-V Network Virtualization works.  Also featured:  How Windows Server 2012 R2 makes it easier than ever to enable customers to move their workloads, and for hosters to improve flexibility/automation/control across any cloud.
    • How to design and configure networking in Microsoft VMM and Hyper-V
      In this session from TechEd North America, the presenters discuss the comprehensive set of options and configuration settings for networking provided by Hyper-V and Virtual Machine Manager (VMM).
    • Project MAT for Shift
      From the popular Building Clouds blog, this post is an update on the very well received VM Migration toolkit. And check out their very entertaining video, The Migrator.
    • What’s new in Windows Server 2012 R2 Networking
      In this session from TechEd North America, you can learn more about how Microsoft has taken what it has learned from its global network of datacenters and applied it to the development of Windows Server 2012 R2.  This session covers networking advancements, network infrastructure enhancements in IPAM, Secure Remote Access to better meet the needs of virtualized environments, and how we have advanced Software Defined Networking with in-box support for hybrid environments.
    • What's New in IPAM in Windows Server 2012 R2
      This technical overview covers role-based access control, virtual address space management, enhanced DHCP server management, external database support, upgrade/migration support, and enhanced Windows PowerShell support.
    • Hyper-V Extensible Switch Enhancements in Windows Server 2012 R2
      In this blog post, we detail the enhancements we have made to the Hyper-V extensible switch in Windows Server 2012 R2.
    • What’s new in System Center 2012 R2, Virtual Machine Manager
      This session from TechEd North America, presented by Vijay Tewari, discusses the new innovations in virtualization, storage, and networking.  Vijay discusses the new capabilities of System Center 2012 R2 Virtual Machine Manager that enable new scenarios for customers – as well as enhancements to existing scenarios.  Also discussed:  How to use SDN to bring agility into cloud-based environments, and storage enhancements that enable customers to easily deploy enterprise-grade workloads.
    • Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM
      Technical guidance with instructions for building a Windows Server 2012 R2 Hyper-V Network Virtualization test environment with System Center 2012 R2 Virtual Machine Manager (VMM), using computers running Windows Server 2012 R2.
    • What's New in VMM in System Center 2012 R2
      This detailed overview covers enhancements to Networking, VM management, Storage, Services, and Infrastructure.
    • What’s New in Hyper-V Network Virtualization in R2
      In this blog post, we go into details on the new capabilities of Hyper-V Network Virtualization in Windows Server 2012 R2.
    • Network Virtualization Technical Details
      Cloud-based datacenters can provide many benefits such as improved scalability and better resource utilization. To realize these potential benefits requires a technology that fundamentally addresses the issues of multi-tenant scalability in a dynamic environment. Hyper-V Network Virtualization was designed to address these issues and also improve the operational efficiency of the datacenter.
    • Overview of Windows Azure Pack for Windows Server
      This in-depth white paper evaluates how the Windows Azure Pack has been built on a foundation of Windows Server and System Center to deliver an enterprise-class, cost-effective solution for multi-tenant cloud infrastructure services. Service providers and enterprise customers can build customizable solutions using industry-standard hardware, broad application platform support, and open technologies.
    • Evaluation Guide for System Center 2012 R2 and the Windows Azure Pack
      Documentation reviewing how to set up and test an evaluation environment for tenant and administrator portals.
    • Microsoft BGP Router configuration automation
      Featured here is a PowerShell script that provides an easy-to-use automated interface for the configuration of BGP Router on Routing and Remote Access Server (both in Multi-Tenant and Single-Tenant modes) on a Microsoft Windows Server 2012 R2 system.
  • What’s New in 2012 R2: Making Device Users Productive and Protecting Corporate Information

    Part 2 of a 9-part series

    The modern workforce isn’t just better connected and more mobile than ever before, it’s also more discerning (and demanding) about the hardware and software used on the job. While company leaders around the world are celebrating the increased productivity and accessibility of their workforce, the exponential increase in devices and platforms that the workforce wants to use can stretch a company’s infrastructure (and IT department!) to its limit.

    If your IT team is grappling with the impact and sheer magnitude of this trend, let me reiterate a fact I’ve noted several times before on this blog: The “Bring Your Own Device” (BYOD) trend is here to stay.

    Building products that address this need is a major facet of the first design pillar I noted last week: People-centric IT (PCIT).

    In today’s post (and in each one that follows in this series), this overview of the architecture and critical components of the PCIT pillar will be followed by a “Next Steps” section at the bottom. The “Next Steps” will include a list of new posts (each one written specifically for that day’s topic) developed by our Windows Server & System Center engineers. Every week, these engineering blogs will provide deep technical detail on the various components discussed in this main post. Today, these blogs will systematically examine and discuss the technology used to power our PCIT solution.

    Our goal is to walk you through the architecture, examples and critical components to each pillar. The “Next Steps” section provides an easy way to review all of the supporting analysis from our engineering teams about the inner workings of the technology at work within Windows Server & System Center.

    PCIT has been a major topic in countless discussions I’ve had with business leaders all over the world.  In each of these meetings my feedback has been pretty straightforward: The number and diversity of devices in your company is only going to increase; this is a trend you really want to embrace.

    This doesn’t mean that there aren’t organizations doing their best to fight back the tide, but I do believe that the companies who embrace the opportunity to keep their employees connected and productive will see a justifiable return on their investment (ROI).

    Making that ROI materialize, however, will depend on how an organization embraces and enables the BYOD trend. In particular, every IT team will need to carefully and individually identify the right balance of access to corporate resources from these devices and then ensure that the necessary security and compliance protocols are enforced. Getting this right can have a big impact on employee morale, and a hassle-free IT environment can even impact retention.

    We have made a big investment in PCIT because we want to enable and empower IT pros to provide their workforce with strong and stable support for the devices they want to work from, while also providing these IT teams with the means to take the appropriate amount of control of those devices – both corporate-supplied and user-owned. The PCIT solution detailed below enables IT Professionals to set access policies to corporate applications and data based on three incredibly important criteria:

    1. The identity of the user
    2. The user’s specific device
    3. The network the user is working from

    The solutions we’ll detail in these PCIT posts enable IT teams to tightly control devices based on the type of work they do and the sensitivity of the corporate data they access. For example, very strict security protocols, usage controls, and data access requirements are necessary for point-of-sale devices, devices on a manufacturing line, devices in a heavily regulated pharmaceutical lab, bank-teller PCs, etc. The need for these kinds of devices to be connected and managed will continue to be critically important. On the other end of the spectrum you have devices such as personal phones and tablets that IT simply cannot control in the same way (Here is a test: Try telling your end-users that you’re going to disable the cameras on their personal phones and see what their reaction is!).

    What’s required here is a single management solution that enables specific features where control is necessary and appropriate, and that also provides what I call “governance,” or light control when less administration is necessary. This means a single pane of glass for managing PCs and devices. Far too often I meet with companies that have two separate solutions running side-by-side – one for PCs, and a second for devices. Not only is this more expensive and more complex, it creates two disjointed experiences for end users and a big headache for the IT pros responsible for managing both.

    In today’s post, Paul Mayfield, the Partner Program Manager for the System Center Configuration Manager/Windows Intune team, discusses how everything Microsoft has built with this solution is focused on enabling IT teams to use the same System Center Configuration Manager they already have in place managing their PCs and to extend that management power to devices. This means twice the management reach from within the same familiar console. This philosophy can be extended even further by using Windows Intune to manage devices where they live – i.e. cloud-based management for cloud-based devices. Cloud-based management is especially important for user-owned devices that need regular updates.

    This is an incredible solution, and the benefit and ease of use for you, the consumer, is monumental.

    * * *

    As you may have seen at the recent TechEd events, we have added several new capabilities across Windows, Windows Server, System Center and Windows Intune this year. These new capabilities are intended to enable our customers to embrace the Consumerization of IT and enable a Bring Your Own Device (BYOD) scenario in their organizations around the world.

    We refer to these new capabilities as “People-centric IT.”

    People-centric IT (PCIT) is about helping people to work on the devices they choose. We’re providing users access to their apps and data on any of their devices in any location. The challenge this presents to IT teams is considerable: As soon as users are working on a device that IT does not manage (or even have any knowledge of), it becomes very difficult to retain control of sensitive corporate information and to be able to respond to situations such as the device being sold, lost, or stolen.

    In particular, the challenges faced by IT teams responsible for a modern corporate infrastructure come from four key areas:

    [Graphic: the four key challenge areas]

    With the 2012 R2 wave of releases (e.g. Windows Server, System Center Configuration Manager, and the next release of Windows Intune), we are helping our customers answer these challenges. Engineers across each of those teams have jointly planned and executed their scenarios across a common set of engineering milestones, and we have delivered these scenarios across three primary areas that drove our priorities and investments in engineering:

    1. Empowering users. This means allowing users to work on the devices of their choice by providing consistent access to corporate apps and data from those devices. We’ll support a broad set of devices in the R2 wave of releases this year, ranging from corporate laptops and desktops to personal phones, laptops, and tablets. We’ll support Windows and iOS devices across all of our PCIT features. Many features will be supported on Android too. For example, enrolling devices for management will be supported across Windows, iOS and Android. Workplace Join and Work Folders will be initially supported on Windows and iOS.
    2. Unifying your environment. With the System Center Configuration Manager console we deliver comprehensive application and device management from a single pane of glass. We’ve also worked to integrate scenarios across on-premises infrastructure (using System Center Configuration Manager, Windows Server, and Active Directory), as well as cloud-based services (using Windows Intune and Windows Azure). Also, it’s worth noting that, on top of all this, we have already integrated management and malware protection.
    3. Helping protect your data. By controlling access to information based on a user, his/her specific device, and the location of that device, IT teams can better control and safeguard corporate assets. These tools can also remove or disable data on devices when they are no longer being used, as well as provide rich auditing and reporting.


    Now let’s look at each of these scenarios, and their benefits, in detail.

    Empower users

    Simple registration and enrollment for users adopting Bring Your Own Device (BYOD) programs

    We’re providing new ways for users to opt in to receiving IT services on their devices. Users can perform a Workplace Join to register their devices in Active Directory, and they can enroll their devices for management in Configuration Manager and Windows Intune.

    You can think of Workplace Join as being a light form of Domain Join but for personal mobile devices.  Registered devices are recorded in the Active Directory and they are issued credentials.  However, they don’t support Group Policy or scripting. Instead, you can manage the device by enrolling it for mobile device management.

    We’re making it simple and easy for users to register their device with the Active Directory. They will want to register their devices in order to get access to corporate resources and in order to enable single-sign-on (SSO).

    Based on the user’s name and password, we’ll look up the tenant (in the case of Azure Active Directory) or look up the local Active Directory Federation Services (AD FS) registration server (in the case of on-prem registration). Then we trigger the device to enroll for a certificate from its registration service.

    As part of that Workplace Join, we’ve created a user@device record in the Active Directory. In this way, we’re enabling your existing AD infrastructure to be extended to accommodate mobile devices. This allows us to provide the IT Pro with an inventory of devices and their users, and to audit the access that will be subsequently granted to those users on those devices. The certificate issued to the device includes both the identity of that device and the identity of the authenticated user. Access to resources published via our Web Application Proxy (see below), or to any other resource that relies on AD FS for authentication, will rely on this certificate for authentication.
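
    A schematic model of this registration flow might look like the following; the class and method names are my own invention, not part of any Microsoft API:

```python
# Schematic model of Workplace Join: the directory records a user@device
# pairing and issues a certificate carrying both identities. Illustrative only.
from dataclasses import dataclass

@dataclass
class DeviceCertificate:
    user: str            # identity of the authenticated user
    device_id: str       # identity of the registered device

class Directory:
    """Stands in for Azure AD or an on-prem AD FS registration service."""
    def __init__(self):
        self.records = {}

    def workplace_join(self, user, device_id):
        # 1. Create the user@device record in the directory.
        key = f"{user}@{device_id}"
        # 2. Issue a certificate carrying both identities; AD FS later
        #    challenges for this certificate when authenticating access.
        cert = DeviceCertificate(user=user, device_id=device_id)
        self.records[key] = cert
        return cert

ad = Directory()
cert = ad.workplace_join("alice@contoso.com", "alice-tablet")
```

    The key design point is that the issued certificate binds the user and the device together, which is what lets AD FS later authorize "this user on this device" rather than the user alone.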

    One thing worth noting: The act of registering the device with Active Directory does not allow IT to control the device in any manner; control is covered by enrollment. Workplace Join is only used to govern access to corporate resources and to enable SSO.

    In addition to registering devices with Active Directory, we’re also making it easy for users to enroll their devices into the Windows Intune management service. Users will want to do this in order to get their devices provisioned, and in order to install corporate apps on their devices.  To do this, the user simply enters his or her user name and password to enroll the device, and the service will then look up the user’s tenant and trigger Mobile Device Management (MDM) enrollment.

    MDM enrollment varies by device. The basics of MDM enrollment include issuing a certificate to authenticate the device to the management system, installing management profiles, and registering a device with an appropriate notification service. As part of the enrollment process, the user will be prompted to consent to allow some administrative control of the device to the IT department. Once enrollment is complete, the management system triggers device provisioning. The device will contact the Windows Intune service and download settings, WiFi Profiles, VPN profiles, side loading keys, apps and more. The device may also enroll for additional certificates that can be used for network authentication or for other security purposes.
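
    The enrollment and provisioning sequence above can be sketched schematically. The classes and payload names below are illustrative stand-ins, not a real Windows Intune API:

```python
# A schematic walk-through of the MDM enrollment sequence: certificate issuance,
# management profile installation, notification registration, then provisioning.

class ManagementService:
    """Stands in for the Windows Intune service."""
    def lookup_tenant(self, username):
        # The service resolves the tenant from the user's sign-in name.
        return username.split("@")[-1]

    def payloads_for(self, username):
        # Provisioning payloads the device pulls down after enrollment.
        return ["settings", "wifi-profile", "vpn-profile",
                "sideloading-keys", "apps"]

class Device:
    def __init__(self):
        self.cert = None
        self.profile_installed = False
        self.notifications_registered = False
        self.applied = []

    def enroll(self, service, username):
        tenant = service.lookup_tenant(username)
        # 1. Issue a certificate that authenticates the device to the system.
        self.cert = f"cert:{username}:{tenant}"
        # 2. Install the management profile and register for notifications.
        self.profile_installed = True
        self.notifications_registered = True
        # 3. Provisioning: pull settings, profiles, keys, and apps.
        self.applied = list(service.payloads_for(username))

d = Device()
d.enroll(ManagementService(), "alice@contoso.com")
```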

    A user can decide to register the device with Active Directory or enroll the device in Windows Intune – or both. We recommend both because the full suite of PCIT services and experiences are only available to devices that are both registered and enrolled. This is the best experience for the user and it provides the best protection for the company.


    Access to company resources consistently across devices

    The company portal (see sample screenshot below) provides users with a consistent interface from which they can gain access to applications (both internal applications and links to public stores), manage their own devices to perform tasks such as remote wipe, and also gain access to their data with integration to Work Folders.

    [Screenshot: the company portal]

    Automatically connect to internal resources when needed

    As part of enrolling for management, users can have their devices provisioned with certificates, WiFi profiles, VPN profiles, and DirectAccess configuration. The VPN profiles can be associated with DNS names or specific applications so that they automatically launch on demand. This allows users to work remotely and always be connected to the corporate network without the need to initiate a VPN connection.

    A new feature (shown below) with Windows Server 2012 R2, System Center 2012 R2 Configuration Manager, and Windows 8.1 is the ability to configure applications to initiate the VPN connection when the application is launched.

    [Graphic: configuring an application to trigger a VPN connection]

    Users can work from the device of their choice to access corporate resources regardless of location

    New in Windows Server 2012 R2 are the Web Application Proxy and Work Folders. The Web Application Proxy provides the ability to publish access to internal resources and to optionally require Multi-Factor Authentication at the edge.

    Here’s an example of how it might work:

    • After the user registers her device and enrolls it for management, she is given access to the Company Portal app.
    • From the Company Portal, she installs (side-loads, in this example) a line of business app.
    • When she launches the app, the app contacts the Web Application Proxy to get access to the backend web service it needs.
    • The Web Application Proxy redirects the app to authenticate with AD FS.
    • AD FS has been configured to challenge the device for the certificate acquired via device registration (Workplace Join).
    • AD FS verifies that the user is authorized to access this corporate resource from this specific device. However, in this example, AD FS has been configured to also challenge the user for an additional factor of authentication when a device is connecting from the Internet.
    • AD FS calls into the multi-factor authentication (MFA) plug-in, which supports integration with any third-party MFA provider. For example, the MFA plug-in may challenge the user to enter a code on their phone.
    • Once the multi-factor authentication requirement is satisfied, AD FS completes the authentication.
    • The Web Application Proxy then permits the app to access its backend service.
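
    The flow above hinges on one policy decision inside AD FS. Here is a minimal sketch of that decision, with invented function and parameter names (this is a model, not AD FS configuration):

```python
# Hypothetical model of the AD FS decision in the walkthrough above: the device
# certificate gates access, and Internet-originated requests additionally
# require a second authentication factor.

def authorize(device_cert_valid, user_authorized_for_app,
              from_internet, mfa_satisfied):
    if not device_cert_valid:
        return False          # device was never Workplace Joined
    if not user_authorized_for_app:
        return False          # user may not access this app from this device
    if from_internet and not mfa_satisfied:
        return False          # extranet policy demands an additional factor
    return True

# Intranet access needs only the registered-device certificate:
assert authorize(True, True, from_internet=False, mfa_satisfied=False)
# The same user from the Internet is blocked until MFA completes:
assert not authorize(True, True, from_internet=True, mfa_satisfied=False)
assert authorize(True, True, from_internet=True, mfa_satisfied=True)
```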

    Work Folders is a new file sync solution that allows users to sync their files from a corporate file server to their devices. The protocol for this sync is HTTPS based. This makes it easy to publish via the Web Application Proxy. This means that users can now sync from both the Intranet and the Internet. It also means the same AD FS-based authentication and authorization controls described above can be applied to syncing corporate files. The files are then stored in an encrypted location on the device. These files can then be selectively removed when the device is un-enrolled for management.
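
    A toy model of that selective-removal behavior, assuming a per-enrollment encryption key (the mechanism here is deliberately simplified for illustration):

```python
# Toy model of selective wipe: Work Folders content lives in an encrypted
# location keyed to the enrollment, so revoking the key at un-enrollment makes
# corporate files unreadable while personal data is untouched.

class ManagedDevice:
    def __init__(self):
        self.corp_key = "enrollment-key"                  # issued at enrollment
        self.corp_files = {"q3-plan.docx": "<ciphertext>"}
        self.personal_files = {"vacation.jpg": "<bytes>"}

    def read_corporate(self, name):
        if self.corp_key is None:
            raise PermissionError("encryption key revoked")
        return self.corp_files[name]

    def unenroll(self):
        # Selective wipe: revoke the key, leave personal data alone.
        self.corp_key = None

device = ManagedDevice()
device.unenroll()
# read_corporate("q3-plan.docx") now raises PermissionError;
# personal_files is unaffected.
```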


    Unify your environment

    On-premises and cloud-based management of devices within a single console

    One important data point for us when we planned People-centric IT was the feedback we gathered from customers about the need to help reduce client management infrastructure costs and complexity. To do this, we worked hard to integrate Configuration Manager and Windows Intune. Our vision was for IT teams to use the Configuration Manager Administrator console to “manage devices where they live”: on-premises desktops and laptops can be serviced through existing on-prem infrastructure, and Internet-connected devices can be serviced through cloud infrastructure.

    All of this functionality is now available – the management of all of these devices and all of this infrastructure can be in one place with the Configuration Manager console, which is already very widely used. Client management and security are now offered in a single, unified solution, which makes it easier to manage devices and applications and to address threats and non-compliance. If you’re a current Configuration Manager customer, adding Windows Intune cloud-based management is quick and easy: Just deploy an Intune connector to your existing System Center 2012 Configuration Manager deployment and you’re ready to go.


    Simplified, user-centric application management across devices

    With Configuration Manager and Windows Intune, we’ve made it easy to ensure that applications are delivered in the optimal way for each device, keeping workers productive. Configuration Manager allows the administrator to define the application once and then target it to a user or group. It evaluates the user’s device type and network connection capabilities, and then delivers the application via the appropriate method (local installation, App-V, RemoteApp, etc.). As a result, whether your employee is using a laptop, VDI session, or iPad – or all of these – you can deliver the app to that user with the best experience on each device.
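
    As a sketch, that per-device delivery decision might be modeled like this. The rules are invented for illustration; the real Configuration Manager evaluation considers far more inputs:

```python
# Illustrative sketch of "define once, deliver appropriately": the admin
# targets one application definition, and a delivery method is chosen per
# device and network. These rules are invented for illustration only.

def choose_delivery(device_type, network_quality):
    if device_type == "ipad":
        return "RemoteApp"            # stream the Windows app to the tablet
    if device_type == "vdi":
        return "App-V"                # virtualized app, no install in the image
    if network_quality == "poor":
        return "App-V streaming"      # avoid a large local install
    return "local MSI install"        # full install on a corporate laptop

assert choose_delivery("laptop", "good") == "local MSI install"
assert choose_delivery("ipad", "good") == "RemoteApp"
```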

    Because of the integration between Windows Intune and Configuration Manager, you can also extend application delivery to all major device types while still centrally managing application delivery across devices from a single console (see graphic below). Applications can include locally-installed MSI packages or App-V applications on Windows devices, remote applications using Microsoft virtualization solutions, web links, or public applications stored in the Windows Store, App Store, or Google Play.

    [Graphic: unified application delivery across device types]

    Comprehensive settings management across platforms, including certificates, VPNs, and wireless network profiles

    We’ve substantially expanded our settings management capabilities across platforms, including certificates, VPNs, and wireless network profiles. Policies can be applied across various devices and operating systems to meet compliance requirements, to the extent of the capabilities exposed on each platform, and we have extended native management to Windows RT, iOS, and Android. IT teams can provision certificates or VPN and Wi-Fi profiles on mobile devices, and get a full app inventory and application push install for corporate-owned devices. There is also functionality to inventory “managed” apps and publish apps for personal devices, and IT teams can remotely wipe and unregister devices from the management system (as supported by each operating system).

    IT can better protect corporate information and mitigate risk by being able to manage a single identity for each user across both on-premises and cloud-based applications

    As users blend their work and personal lives, and as organizations adopt a mixture of traditional on-premises and cloud-based solutions, IT teams need a way to consistently manage the user’s identity and provide users with a single sign-on to all their resources.  We’re helping IT departments do this by providing users with a common identity across on-premises and cloud-based services, leveraging existing Windows Server Active Directory investments and then connecting to Windows Azure Active Directory.

    A common part of connecting on-prem AD to Azure AD is deploying Active Directory Federation Services. In Windows Server 2012 R2, we have significantly enhanced AD FS to be easier to deploy and configure, and we’ve tightly integrated it with the Web Application Proxy for simple app publishing (see graphic below).

    [Graphic: AD FS and Web Application Proxy integration]

    Help protect data

    IT can access managed mobile devices to remove corporate data and applications in the event that the device is lost, stolen, or retired from use

    Whether a device is lost, stolen or simply being repurposed, there will be times when IT needs to ensure that the corporate information stored on the device is no longer accessible. With the R2 wave of releases, we have added the ability to selectively wipe corporate information while leaving personal data intact.

    Content removed when retiring a device:

    • Company apps and associated data installed by using Configuration Manager and Windows Intune
      - Windows 8.1 Preview: Uninstalled, and sideloading keys are removed. In addition, any apps using Windows Selective Wipe will have the encryption key revoked, and data will no longer be accessible.
      - Windows RT: Sideloading keys are removed, but apps remain installed.
      - Windows Phone 8: Uninstalled, and data removed.
      - iOS: Uninstalled, and data removed.
      - Android: Apps and data remain installed.
    • VPN and Wi-Fi profiles
      - Windows 8.1 Preview: Removed.
      - Windows RT: Not applicable.
      - Windows Phone 8: Not applicable.
      - iOS: Removed.
      - Android: VPN: Not applicable. Wi-Fi: Not removed.
    • Certificates
      - Windows 8.1 Preview: Removed and revoked.
      - Windows RT: Not applicable.
      - Windows Phone 8: Not applicable.
      - iOS: Removed and revoked.
      - Android: Revoked.
    • Settings
      - All platforms: Requirements removed.
    • Management client
      - Windows 8.1 Preview, Windows RT, and Windows Phone 8: Not applicable (the management agent is built in).
      - iOS: The management profile is removed.
      - Android: The Device Administrator privilege is revoked.

    IT can set policy-based access control for compliance and data protection.

    With users working on personal devices, there are real challenges in ensuring that compliance standards are met and that information is protected. Inside Windows Server 2012 R2, we’ve added new capabilities in the Web Application Proxy, AD FS, and Work Folders to make it easy for IT teams to make resources available while remaining in control of data.

    With the Multi-Factor Access Control capability in AD FS, access control policies can be authored using multiple criteria, including the identity of the user, the identity of the device, whether the request is coming from intranet or extranet, and any additional authentication factors used to identify the user.

    As we showed at the TechEd Europe keynote in Madrid, Work Folders is integrated with Dynamic Access Control, providing the ability to automatically classify information based on content, and perform tasks such as protecting with Rights Management Services – even for data that is created and stored on clients!


     

    Bringing it all together!

    To see People-centric IT, including System Center 2012 R2 Configuration Manager, Windows Intune, and Windows Server 2012 R2 in action, you can watch a complete presentation and end-to-end demonstration from the TechEd North America Foundational Session here. You can also learn more about People-centric IT by downloading the People-centric IT Preview Guide.

    Be sure to download System Center 2012 R2 Preview Configuration Manager and Windows Server 2012 R2 Preview today!

    * * *

    Over the last 20 years I have led a number of PC/Device management teams, and I have seen every possible variety of software solution along the way. I truly believe that what we are delivering in this 2012 R2 wave is the single most complete and comprehensive solution that has ever been released for enabling users across PC and devices. It’s amazing to see what our early foundations have helped us build for our users today.

    These 2012 R2 products deliver a unified solution to end-users across PCs and devices, and, when you consider the need for this product to be powerful and scalable for the IT teams using it – there is simply no better platform for IT pros.

    I encourage everyone to spend a few minutes today creating an account on Windows Intune (you can use the entire solution – with no capabilities held back – for a free 90-day trial) and test drive the management power available in this remarkable product. Try enrolling a few PCs and devices in the service* and start experimenting with managing your devices from the cloud – I think you’ll be impressed.

    - Brad

    NEXT STEPS:

    To learn even more about the technical topics discussed today, check out these posts from our engineering teams:

    * In the current production service you will not see the complete set of device management capabilities enabled yet – we are running an invitation-only customer preview on a pre-production service. You will see these capabilities later this year.