Cloud Insights from Brad Anderson, Corporate Vice President, Enterprise Client & Mobility
Check out my appearance on the latest episode of The Edge Show with Rick Claus.
In this episode I talk about some of the behind-the-scenes elements in the “What’s New in 2012 R2” series, with an emphasis on the first pillar covering PCIT. I’ll meet up with Rick again later to discuss each of the remaining pillars from the series – Transform the Datacenter and Enable Modern Business Apps.
Thanks to everyone who sent in questions for this video, and for all the feedback on the series so far. Starting next Monday, we begin a four-week look at the second pillar, Transform the Datacenter, with a couple of posts on what R2 can do for a cloud OS infrastructure.
This week’s episode is a repost of an interview I did with TechNet Radio earlier this week. In this interview I talk with host Kevin Remde about the new 2012 R2 wave of products, and we talk about the first two posts (covering People-centric IT) in our new nine-part 2012 R2 series.
Check out the TechNet Radio site to see more interviews with industry leaders, and don’t forget to bookmark the 2012 R2 archive.
If you have a question for a future “In the Cloud” podcast, just let me know!
An important thing to keep in mind with either of these questions is that every organization has their own unique journey to the cloud. There are a lot of different workloads that run on Windows Server, and the reality is that these various workloads are moving to the cloud at very different rates. Web servers, e-mail and collaboration are examples of workloads moving to the cloud very quickly. I believe that management, and the management of smart devices, will be one of the next workloads to make that move to the cloud – and, when the time comes, that move will happen fast.
Using a SaaS solution is a move to the cloud, and taking this approach is a game changer because of its ability to deliver an incredible amount of value and agility without an IT pro needing to manage any of the required infrastructure.
Cloud-based device management is a particularly interesting development because it allows IT pros to manage this rapidly growing population of smart, cloud-connected devices, and manage them “where they live.” Today’s smart phones and tablets were built to consume cloud services, and this is one of the reasons why I believe that a cloud-based management solution for them is so natural. As you contemplate your organization’s move to the cloud, I suggest that managing all of your smart devices from the cloud should be one of your top priorities.
I want to be clear, however, about the nature of this kind of management: We believe that there should be one consistent management experience across PCs and devices.
Achieving this single management experience was a major focus of these 2012 R2 releases, and I am incredibly proud to say we have successfully engineered products which do exactly that. The R2 releases deliver this consistent end-user experience through something we call the “Company Portal.” The Company Portal is already deployed here at Microsoft, and it is what we are currently using to upgrade our entire workforce to Windows 8.1. I’ve personally used it to upgrade my desktop, laptop, and Surface – and the process could not have been easier.
In this week’s post, Paul Mayfield, the Partner Program Manager for System Center Configuration Manager/Windows Intune, and his team return to discuss in deep technical detail some of the specific scenarios our PCIT team has enabled (cloud-based management, Company Portal, etc.). Their work is a great example of the “One Microsoft” principle behind our company’s new reorganization. The PCIT team worked across teams and divisional boundaries to holistically address customer needs across Microsoft products.
I’m excited for all of our partners and customers to immerse themselves in this new technology and begin benefitting from R2’s new and improved features.
* * *
Last week, we discussed some of the new capabilities coming in the R2 wave of releases later this year. These new capabilities enable IT Pros to both embrace the Consumerization of IT and support the needs of their users to securely access their apps and data from any device anywhere. We have been referring to this new set of capabilities as “People Centric IT” (PCIT) because it puts the end user at the center of all that we do.
We don’t just think about these features differently, however – we’ve also engineered them differently. Engineering teams across major product lines and divisions collaborated to define these people-centric scenarios, and, together, we have executed against a common set of engineering milestones. Now, after working closely together for months, we are releasing together too.
In today’s post, we’ll examine a few examples of holistic end-to-end customer scenarios that are a result of our cross-company collaboration. Specifically, we’ll look at:
Each of these three examples combines and maximizes capabilities from across Windows, Windows Server, System Center and Windows Intune. These examples include features like Work Folders, the Web Application Proxy, Active Directory Federation Services, Unified Device Management in System Center Configuration Manager, the new Modern VPN Platform, and the Company Portal.
The diagram below articulates the major components at work in the People Centric IT pillar. It’s not necessary for each of these parts to be deployed in any given scenario. Instead, IT teams can deploy just the parts needed to successfully implement the scenarios they care about. In the scenarios detailed below we’ll refer back to the technologies on this diagram.
View of major components that provide People Centric IT scenarios.
This first example examines one way to enable users to access their work files on personal devices, as well as how to mitigate risks around compliance and information leakage. This example focuses on the new Work Folders feature in Windows Server 2012 R2.
Work Folders is a new file sync feature that complements SkyDrive Pro. Where SkyDrive Pro gives users access to their SharePoint data, Work Folders provides access to files on traditional file servers. With Work Folders, users sync data to a corporate backend file server. This means you can use all of the tools for file servers with Work Folders to manage and secure this data. Windows 8.1 will include a built-in Work Folders client. Also, soon after we release Windows 8.1, we will release Work Folders support for Windows 7 and iPad*.
The basic steps to enabling secure file access to personal devices are:
Once this has been done, our services work together to secure your data:
Combining all these capabilities allows you to provide data access to your users (who continue using their BYOD devices) while still protecting your corporate data.
With all of this in mind, here’s an example of what it looks like in action.
As an IT Pro, your first step is to enable Work Folders on your file servers. This simple process is part of configuring the File Server role in Windows Server 2012 R2. Once complete, authorized users can manually set up Work Folders on their devices.
However, you don’t have to require users to configure Work Folders themselves. You can use System Center Configuration Manager and Windows Intune to automatically provision personal devices with Work Folders settings, and use Group Policy for domain-joined machines. In the new R2 release of Config Manager, we’ve added native support for configuring Work Folders policy. You can target this policy based on a rich set of criteria. Once configured, any device that is domain joined, managed by Config Manager, or enrolled for Mobile Device Management (MDM) in Windows Intune receives Work Folders configuration as part of normal device provisioning.
Since Work Folders operates over an HTTPS-based protocol, you can now publish Work Folders to the Internet using the new Web Application Proxy (or any industry standard web publishing solution). This allows controlled access to files on Internet-connected devices, even if those devices never connect to the corporate network. The Web Application Proxy integrates with AD FS. This means you can use AD FS to restrict access to devices that are Workplace Joined or domain joined. You can even require multi-factor authentication, leveraging the new plug-in model that AD FS provides in Windows Server 2012 R2.
Selecting the authentication provider for Work Folders to be AD FS. This integrates access control for Work Folders with our new Workplace Join features.
Finally, the IT pro can configure automatic classification of files using Dynamic Access Control. Classification rules can key in on sensitive words, phrases or regular expressions in a document (such as credit card numbers or the word “confidential”), causing those files to be automatically classified as high impact and then automatically encrypted and protected with RMS. These encryption rules help reduce leakage of sensitive information. For example, when Joe (a hypothetical information worker) authors a sensitive customer information document on his desktop, the document synchronizes to the Work Folders share on the file server. The file server then automatically scans the document, tags it as sensitive, and applies RMS protection. The RMS-protected version of the document then synchronizes back to Joe’s desktop and all other Work Folders-enabled devices. This entire process is seamless to Joe, so he can continue working on the sensitive document without worrying about compliance or information leakage (e.g. if his device is lost or stolen, the sensitive data on it cannot be accessed).
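The classify-then-protect pipeline described above can be pictured with a small sketch. This is an illustrative Python model, not the actual File Classification Infrastructure or RMS machinery on the server; the patterns and function names are hypothetical.

```python
import re

# Illustrative patterns only; real classification rules are configured in
# Windows Server (File Classification Infrastructure), not written in Python.
SENSITIVE_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # naive credit-card-like number
]

def classify_impact(text):
    """Return 'High' if any sensitive pattern matches, else 'Low'."""
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(text):
            return "High"
    return "Low"

def process_synced_file(text):
    """Mimic the server-side pipeline: classify, then protect if needed."""
    impact = classify_impact(text)
    return {"impact": impact, "rms_protected": impact == "High"}
```

The key design point is that the scan happens on the file server after sync, so the user never has to remember to protect anything.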
A major focus of the R2 development was to create a user experience that was simple and familiar. Part of this simplicity included making the protection and compliance automatic (i.e. not relying on users taking the necessary actions to protect corporate data).
To keep the user experience simple and familiar, we chose to present Work Folders as just another folder on the device. This allows users to access and store work data in a single place using a normal folder. The user doesn’t need to know that it synchronizes with other devices through a backend file server. In function, this is similar to how SkyDrive and SkyDrive Pro work.
Work Folders show up as a normal folder in the user’s profile on Windows.
In order to make compliance and protection automatic, we designed Work Folders so that the IT pro determines what policies are applied to what devices and what files. IT Pros manage the data using standard file server data management capabilities, thus backup, retention, classification and encryption are all handled for the end user automatically. In addition to this, Work Folder files are always synchronized to folders on the users’ devices that are separately encrypted just for this purpose. This allows the files to be wiped from the device in a secure manner if/when that is necessary.
Configuring basic policy requirements for personal devices accessing Work Folders.
Back to Hypothetical Joe: Suppose Joe buys a new Surface RT and wants to access files from work. He simply has to Workplace Join his device and enroll for management. As part of enrollment, his Work Folder configuration will be automatically provisioned and his files will start to sync to an encrypted folder. Joe now has all his work files available to him. As he makes changes to these files on his Surface RT, the changes synchronize to his desktop at work and vice-versa. As he creates sensitive documents, they are automatically classified and RMS protected.
Later, when Joe leaves the company, the IT team removes his devices from management and Joe’s Surface RT automatically wipes (rendered inaccessible) his Work Folders data while leaving all his personal data intact.
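The selective wipe works because, as noted above, Work Folders data always lands in a separately encrypted folder on the device. A minimal conceptual sketch in Python (the field names are invented for illustration; the real behavior is implemented by the sync client and the management service):

```python
def retire_device(device):
    """Selective wipe on departure: corporate data becomes inaccessible,
    personal data is deliberately left untouched (conceptual model only)."""
    device["enrolled"] = False
    # Destroying the key renders the separately encrypted corporate
    # files unreadable, even if copies remain on disk.
    device.pop("work_folders_key", None)
    device["work_folders"] = []  # local synced copies removed
    # device["personal_files"] is intentionally never touched
    return device
```

This is why IT can confidently remove a departing employee's device from management without a full factory reset.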
This example provides an overview of one way to enable heterogeneous devices at work, and it demonstrates how to register an iOS device with your company’s Active Directory and then enroll it into System Center and Windows Intune.
Once a device is registered, IT pros can control access to company resources based on that user, the device and the location. Once the device is enrolled, IT pros can configure the device, monitor its compliance, publish line of business apps and perform other management tasks.
Although People-centric IT capabilities are great on Windows devices, they are not limited to Windows. We also put a lot of work into enabling first-class support for heterogeneous devices. Putting users at the center of what we do means enabling a broad set of devices.
Sometimes MDM is discussed as a separate activity: a dedicated tool for managing mobile devices deployed alongside the solutions that manage PCs and other devices. We think about MDM more broadly. MDM is part of managing all of your devices, and we are building a single solution for managing all of your PCs and devices. We call this concept Unified Device Management.
To deliver UDM, we built the ability to attach Windows Intune to your System Center Configuration Manager deployment. This creates a single console (the Config Manager console) that can manage all types of devices – PCs, mobile devices, even embedded devices. The Config Manager console, when attached to Intune, includes MDM capabilities as a natural part of managing all devices. All device types need apps, data, settings, content and services, and Config Manager enables that in one place.
Here’s how you do it:
Setting up the iOS properties of the Windows Intune connector in the Config Manager console.
Uploading the APNs Certificate as part of enabling iOS management from Config Manager.
Once this is complete, your users are ready to begin working. They simply download the Company Portal app from the Apple App Store. Seen below is the Company Portal app in the Windows Store; the iOS version will appear in the Apple App Store as part of our R2 wave of releases later this year.
The next time they launch the Company Portal app, they will be prompted for their credentials. From within the app, the user can then opt in to the Windows Intune service, after which they can access corporate apps and services.
The notification in the upcoming iOS Company Portal app that a device needs to be enrolled with Windows Intune service.
Enrolling the iOS device results in a new Management profile.
The enrollment process installs a certificate and MDM profile on the iOS device from the Intune cloud service. Once complete, that iOS device shows up in the Config Manager console alongside the other devices associated with that user. Any policies or apps targeted to users of that device that are appropriate for iOS will flow from Config Manager out to the Intune cloud and then down to the user’s device. In this way, the device gets provisioned with the settings, Wi-Fi profiles, VPN profiles, certificates, apps and other resources it needs. At the same time, IT is able to monitor and manage the device on an ongoing basis.
All of the user’s devices (PCs and mobile) show up in one place together when Config Manager is attached to Windows Intune.
The next element to consider is how to control access to corporate resources on that device. That’s where our integration with Active Directory, the Web Application Proxy, and Active Directory Federation Services (AD FS) comes in.
With the R2 wave of releases, it is now possible to publish applications and data through the new Web Application Proxy, a service role that is part of the Windows Server Remote Access (RRAS) role. The Web Application Proxy then integrates with AD FS to control access to corporate resources based on a user’s identity, device, and location.
For example: Suppose an IT pro needs to publish an internal SharePoint site to Internet-connected devices, but this access needs to be limited to known devices that belong to valid users. The IT pro can do this by configuring the AD FS role to restrict access to that SharePoint site to devices that have been Workplace Joined. Attempts to access the published site will be denied when users connect from devices that are not Workplace Joined, but these same attempts will succeed from devices that are registered. Workplace Join is supported for both Windows and iOS devices.
Example experience of denying user access to a published web site from an iOS device because that device is not Workplace Joined.
To allow users to register iOS devices, you need to first enable the device registration service that is part of Active Directory Federation Services (AD FS). Once this is complete, users can simply be e-mailed a link to register their device. When the user clicks on that link they will be prompted for their credentials, and the registration service will record a user@device record in Active Directory and then issue a certificate profile to the iOS device. Once the user accepts this, they are good to go.
E-mail from IT pro with instructions to Workplace Join an iOS device.
Workplace Join, like any other Web Application Proxy-published IT service, includes the ability to require multi-factor authentication.
Installing the Workplace Join profile on an iOS device.
The profile is successfully installed and the iOS device is now Workplace Joined. Attempts to access the published website will now succeed, and IT can audit and control the access.
When the user attempts to access the published SharePoint site, the Web Application Proxy will refer the device to AD FS for authentication and authorization. AD FS will challenge the user for their credentials and also challenge the device for the certificate that was installed as part of the Workplace Join. Once AD FS recognizes this as a known device belonging to a valid user, it allows the authentication to succeed and the user receives controlled access to the SharePoint site.
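The access decision described above reduces to two checks: the user's credential challenge must succeed, and the device certificate installed by Workplace Join must match a registration record. A hedged Python sketch (the record set and all names are invented for illustration; the real checks are performed by AD FS against Active Directory):

```python
from typing import Optional

# Stand-in for the user@device records that AD FS consults in Active Directory.
REGISTERED_DEVICE_CERTS = {"cert-joe-surface"}

def authorize(user_authenticated: bool, device_cert: Optional[str]) -> bool:
    """Allow access only for valid users on Workplace Joined devices."""
    if not user_authenticated:
        return False  # credential challenge failed
    # Device challenge: the certificate installed during Workplace Join
    # must correspond to a known registration record.
    return device_cert in REGISTERED_DEVICE_CERTS
```

Note that a valid user on an unregistered device is still denied, which is exactly the behavior shown in the iOS screenshot earlier in this post.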
In this example, we will examine a way to enable VPN connectivity on personal Windows 8.1 (including Windows 8.1 RT) devices. To do this, we will look at how to bring Windows 8.1, Windows Server 2012 R2, System Center and Windows Intune together.
Windows 8.1 includes both Microsoft and third-party VPN client applications (plug-ins) already installed. These VPN plug-ins ship with the Windows 8.1 OS (inbox) and do not require an application download. The overall experience blends with Windows, providing end users a consistent and integrated experience. The new VPN platform also includes VPN profile management options and Automatic VPN capabilities. Automatic VPN connections dynamically come up or down as needed. You configure Automatic VPN for both namespaces and applications, and the VPN connection is launched automatically whenever access to corporate resources is requested.
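The Automatic VPN decision can be sketched conceptually: a connection is triggered either because the destination falls under a configured DNS namespace or because a configured application makes a network request. This is an illustrative Python model, not Windows' actual implementation; the trigger values are hypothetical examples in the contoso.com style used elsewhere in this post.

```python
# Hypothetical trigger sets mirroring the two Automatic VPN trigger types.
NAMESPACE_TRIGGERS = {".corp.contoso.com"}  # DNS-suffix (namespace) triggers
APP_TRIGGERS = {"microsoft.windowscommunicationsapps_8wekyb3d8bbwe"}

def should_connect_vpn(requested_host=None, launching_app=None):
    """Bring the VPN up if the destination falls under a triggered namespace
    or the request comes from a triggered application."""
    if requested_host and any(
            requested_host.endswith(sfx) for sfx in NAMESPACE_TRIGGERS):
        return True
    return launching_app in APP_TRIGGERS
```

Either trigger type alone is enough to raise the connection; traffic to public sites from untriggered apps leaves the VPN down.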
In addition to including third party VPN plug-ins, Windows 8.1 also includes built-in manageability that allows multiple options for VPN profile management, including:
A cool new feature in the R2 wave of releases is the native ability to manage VPN profiles in System Center. We support both Microsoft and third-party VPNs. You can provision all of your devices from the Config Manager console, regardless of whether the device is a PC managed by the Config Manager agent or a mobile device enrolled in Windows Intune. When a user enrolls their device for MDM, the system automatically provisions the appropriate VPN profiles. Additionally, the device enrolls for any certificates the VPN profile requires. Finally, the Automatic VPN policies are applied to the device. When provisioning is complete, the user is ready to start using the VPNs.
Creating a VPN Profile in Config Manager. The profile will be delivered to managed PCs and mobile devices.
Selecting the VPN type in Config Manager. Our new platform supports both Microsoft and third party VPNs.
Selecting the authentication type in Config Manager. Again, we support both Microsoft and third party options.
Configuring the Automatic VPN settings. You can trigger a connection as needed based on a namespace or an application.
Whether you use Microsoft’s VPN gateways (the Remote Access role in Windows Server) or a third party VPN gateway, the experience remains the same to the end user. Our new VPN platform in Windows 8.1 integrates VPN into the Windows experience. This means it is no longer necessary to send special instructions to users. By simply enrolling for management, the profiles, certificates and Automatic VPN rules show up on the device. The user then attempts to access a corporate resource and the VPN automatically connects to enable it. If the VPN ever needs additional input from the user (e.g. a password or second factor of authentication), then a toast notification will take the user to a standard network experience to provide it. This experience is similar to joining a new Wi-Fi or mobile broadband network.
Example of providing additional user input as needed by the VPN – all integrated into the standard charms bar, even for third party VPNs.
In addition to supporting VPN management through System Center, we also support VPN scripting using built-in PowerShell commands. You can use these scripts in imaging or automation workflows – or you can publish them via an out-of-band mechanism such as an external website or a file share. The scripts create VPN profiles and configure remote access on the device.
# Adding name trigger rules
PS C:\> Add-VpnConnectionTriggerDnsConfiguration -ConnectionName "test" -DnsSuffix ".corp.contoso.com" -DnsIPAddress "1.1.1.10"
PS C:\> Add-VpnConnectionTriggerDnsConfiguration -ConnectionName "test" -DnsSuffix ".domain1.corp.contoso.com"
# Adding application trigger rules – modern app
PS C:\> Add-VpnConnectionTriggerApplication -ConnectionName "test" -ApplicationID "microsoft.windowscommunicationsapps_8wekyb3d8bbwe"
# Adding application trigger rules – classic app
PS C:\> Add-VpnConnectionTriggerApplication -ConnectionName "test" -ApplicationID "C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Microsoft Product Studio"
# Configuring the suffix search list
PS C:\> Set-VpnConnectionTriggerDnsConfiguration -ConnectionName "test" -DnsSuffixSearchList ("domain2.corp.contoso.com","domain3.corp.contoso.com")
# Adding more entries to the Trusted Network List
PS C:\> Add-VpnConnectionTriggerTrustedNetwork -ConnectionName "test" -DnsSuffix "corp.contoso2.com"
This overview represents some of the most dynamic features available from this new R2 wave, but there are dozens more. As IT teams deploy and operate the R2 products and services together, users will get to see even more examples of the power and functionality of these platforms.
This strength from interoperation is a result of our decision to approach the engineering of this R2 wave as a hyper-collaborative, highly structured, cross-Microsoft effort. The result of these thousands of hours spent working across teams and across the company is a new suite of products that work great together and enable our community of users to approach their jobs with better-than-ever tools.
In the coming weeks, the other pillar owners will further demonstrate these points as this series examines the Cloud OS, Hybrid IT, and Modern App capabilities built into these 2012 R2 products. You can keep track of each of these posts by bookmarking this link, or by following me on Twitter for regular updates.
- Brad
To learn even more about the technical topics discussed today, check out these posts from our engineering teams:
* iPad is a trademark of Apple Inc.
The modern workforce isn’t just better connected and more mobile than ever before, it’s also more discerning (and demanding) about the hardware and software used on the job. While company leaders around the world are celebrating the increased productivity and accessibility of their workforce, the exponential increase in devices and platforms that the workforce wants to use can stretch a company’s infrastructure (and IT department!) to its limit.
If your IT team is grappling with the impact and sheer magnitude of this trend, let me reiterate a fact I’ve noted several times before on this blog: The “Bring Your Own Device” (BYOD) trend is here to stay.
Building products that address this need is a major facet of the first design pillar I noted last week: People-centric IT (PCIT).
In today’s post (and in each one that follows in this series), this overview of the architecture and critical components of the PCIT pillar will be followed by a “Next Steps” section at the bottom. The “Next Steps” will include a list of new posts (each one written specifically for that day’s topic) developed by our Windows Server & System Center engineers. Every week, these engineering blogs will provide deep technical detail on the various components discussed in this main post. Today, these blogs will systematically examine and discuss the technology used to power our PCIT solution.
Our goal is to walk you through the architecture, examples and critical components to each pillar. The “Next Steps” section provides an easy way to review all of the supporting analysis from our engineering teams about the inner workings of the technology at work within Windows Server & System Center.
PCIT has been a major topic of countless discussions I’ve had with business leaders all over the world. In each of these meetings my feedback has been pretty straightforward: The number and diversity of devices in your company is only going to increase; this is a trend you really want to embrace.
This doesn’t mean that there aren’t organizations doing their best to fight back the tide, but I do believe that the companies who embrace the opportunity to keep their employees connected and productive will see a justifiable return on their investment (ROI).
Making that ROI materialize, however, will depend on how an organization embraces and enables the BYOD trend. In particular, every IT team will need to carefully and individually identify the right balance of access to corporate resources from these devices and then ensure that the necessary security and compliance protocols are enforced. Getting this right can have a big impact on employee morale, and a hassle-free IT environment can even impact retention.
We have made a big investment in PCIT because we want to enable and empower IT pros to provide their workforce with strong and stable support for the devices they want to work from, while also providing these IT teams with the means to take the appropriate amount of control of those devices – both corporate-supplied and user-owned. The PCIT solution detailed below enables IT Professionals to set access policies to corporate applications and data based on three incredibly important criteria:
The solutions we’ll detail in these PCIT posts enable IT teams to tightly control devices based on the type of work they do and the sensitivity of the corporate data they access. For example, very strict security protocols, usage controls, and data access requirements are necessary for point-of-sale devices, devices on a manufacturing line, devices in a heavily regulated pharmaceutical lab, bank-teller PCs, etc. The need for these kinds of devices to be connected and managed will continue to be critically important. On the other end of the spectrum you have devices such as personal phones and tablets that IT simply cannot control in the same way (Here is a test: Try telling your end-users that you’re going to disable the cameras on their personal phones and see what their reaction is!).
What’s required here is a single management solution that enables specific features where control is necessary and appropriate, and that also provides what I call “governance,” or light control where less administration is needed. This means a single pane of glass for managing PCs and devices. Far too often I meet with companies that have two separate solutions running side-by-side: one for the PCs and a second to manage devices. Not only is this more expensive and more complex, it creates two disjointed experiences for end users and a big headache for the IT pros responsible for managing them.
In today’s post, Paul Mayfield, the Partner Program Manager for the System Center Configuration Manager/Windows Intune team, discusses how everything that Microsoft has built with this solution is focused on letting IT teams use the same System Center Configuration Manager they already have in place for managing their PCs and extend that management power to devices. This means managing both PCs and devices from within the same familiar console. This philosophy can be extended even further by using Windows Intune to manage devices where they live – i.e. cloud-based management for cloud-based devices. Cloud-based management is especially important for user-owned devices that need regular updates.
This is an incredible solution, and the benefit and ease of use for you, the consumer, is monumental.
As you may have seen at the recent TechEd events, we have added several new capabilities across Windows, Windows Server, System Center and Windows Intune this year. These new capabilities are intended to enable our customers to embrace the Consumerization of IT and enable a Bring Your Own Device (BYOD) scenario in their organizations around the world.
We refer to these new capabilities as “People-centric IT.”
People-centric IT (PCIT) is about helping people to work on the devices they choose. We’re providing users access to their apps and data on any of their devices in any location. The challenge this presents to IT teams is considerable: As soon as users are working on a device that IT does not manage (or even have any knowledge of), it becomes very difficult to retain control of sensitive corporate information and to be able to respond to situations such as the device being sold, lost, or stolen.
In particular, the challenges faced by IT teams responsible for a modern corporate infrastructure come from four key areas:
With the 2012 R2 wave of releases (e.g. Windows Server, System Center Configuration Manager, and the next release of Windows Intune), we are helping our customers answer these challenges. Engineers across each of those teams have jointly planned and executed their scenarios across a common set of engineering milestones, and we have delivered these scenarios across three primary areas that drove our priorities and investments in engineering:
Now let’s look at each of these scenarios, and their benefits, in detail.
Simple registration and enrollment for users adopting Bring Your Own Device (BYOD) programs
We’re providing new ways for users to opt into receiving IT services on their devices. Users can perform a Workplace Join to register their devices in Active Directory, and they can enroll their devices for management in Configuration Manager and Windows Intune.
You can think of Workplace Join as a lighter-weight form of Domain Join for personal mobile devices. Registered devices are recorded in Active Directory and are issued credentials. However, they don’t support Group Policy or scripting. Instead, you can manage the device by enrolling it for mobile device management.
We’re making it simple and easy for users to register their device with the Active Directory. They will want to register their devices in order to get access to corporate resources and in order to enable single-sign-on (SSO).
Based on the user’s name and password, we’ll look up the tenant (in the case of Azure Active Directory) or look up the local Active Directory Federation Services (AD FS) registration server (in the case of on-prem registration). Then we trigger the device to enroll for a certificate from its registration service.
As part of that Workplace Join, we’ve created a user@device record in the Active Directory. In this way, we’re enabling your existing AD infrastructure to be extended to accommodate mobile devices. This allows us to provide the IT Pro with an inventory of devices and their users, and to audit the access that will be subsequently granted to those users on those devices. The certificate issued to the device includes both the identity of that device and the identity of the authenticated user. Access to resources published via our Web Application Proxy (see below), or to any other resource that relies on AD FS for authentication, will rely on this certificate for authentication.
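A rough model of what Workplace Join records, under the assumption stated above that the issued certificate binds both the device identity and the user identity. This is illustrative Python, not the real Active Directory schema or certificate format; every name here is hypothetical.

```python
import hashlib

directory = {}  # stand-in for the user@device records in Active Directory

def workplace_join(user, device_id):
    """Record a user@device registration and issue a certificate that carries
    both identities (an illustrative stand-in for the real X.509 certificate)."""
    record = f"{user}@{device_id}"
    thumbprint = hashlib.sha1(record.encode()).hexdigest()
    directory[record] = thumbprint  # auditable inventory of devices and users
    return {"subject": record, "thumbprint": thumbprint}
```

Because the record pairs a specific user with a specific device, IT gets both an inventory and an audit trail for the access later granted through AD FS.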
One thing worth noting: registering the device with Active Directory does not allow IT to control the device in any manner; control is granted through enrollment. Workplace Join is used only to govern access to corporate resources and to enable SSO.
In addition to registering devices with Active Directory, we’re also making it easy for users to enroll their devices into the Windows Intune management service. Users will want to do this in order to get their devices provisioned, and in order to install corporate apps on their devices. To do this, the user simply enters his or her user name and password to enroll the device, and the service will then look up the user’s tenant and trigger Mobile Device Management (MDM) enrollment.
MDM enrollment varies by device. The basics of MDM enrollment include issuing a certificate to authenticate the device to the management system, installing management profiles, and registering a device with an appropriate notification service. As part of the enrollment process, the user will be prompted to consent to allow some administrative control of the device to the IT department. Once enrollment is complete, the management system triggers device provisioning. The device will contact the Windows Intune service and download settings, WiFi Profiles, VPN profiles, side loading keys, apps and more. The device may also enroll for additional certificates that can be used for network authentication or for other security purposes.
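The basic sequence above can be sketched as follows. Every name here is hypothetical; real MDM enrollment is a device-specific protocol, not this Python API.

```python
# Hypothetical sketch of the MDM enrollment steps described above:
# consent, certificate issuance, notification registration, provisioning.

def enroll_device(device_id, user, user_consents, service):
    """Enroll a device and return its provisioning state, or None if the
    user declines to grant the IT department administrative control."""
    if not user_consents:
        return None
    # 1. Issue a certificate that authenticates the device to the service.
    cert = "cert:{}:{}".format(user, device_id)
    # 2. Install the management profile and register the device with the
    #    appropriate push notification service.
    service["notification_registrations"].append(device_id)
    # 3. Provision: the device pulls settings, Wi-Fi/VPN profiles,
    #    sideloading keys, and apps from the service.
    return {"certificate": cert, "payloads": list(service["payloads"])}

service = {
    "payloads": ["wifi-profile", "vpn-profile", "sideloading-key", "corp-app"],
    "notification_registrations": [],
}
state = enroll_device("tablet-01", "alice", True, service)
```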
A user can decide to register the device with Active Directory or enroll the device in Windows Intune – or both. We recommend both because the full suite of PCIT services and experiences are only available to devices that are both registered and enrolled. This is the best experience for the user and it provides the best protection for the company.
Access to company resources consistently across devices
The company portal (see sample screenshot below) provides users with a consistent interface from which they can gain access to applications (both internal applications and links to public stores), manage their own devices to perform tasks such as remote wipe, and also gain access to their data with integration to Work Folders.
Automatically connect to internal resources when needed
As part of enrolling for management, users can have their devices provisioned with certificates, WiFi profiles, VPN profiles, and DirectAccess configuration. The VPN profiles can be associated with DNS names or specific applications so that they automatically launch on demand. This allows users to work remotely and always be connected to the corporate network without the need to initiate a VPN connection.
A new feature (shown below) with Windows Server 2012 R2, System Center 2012 R2 Configuration Manager, and Windows 8.1 is the ability to configure applications to initiate the VPN connection when the application is launched.
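In rough terms, the trigger logic looks like this. This is an illustrative Python sketch with invented rule data, not the actual Windows 8.1 implementation.

```python
# Illustrative auto-trigger VPN rules: a profile can be associated with DNS
# suffixes and with specific applications (the data below is invented).

VPN_TRIGGERS = {
    "dns_suffixes": [".corp.contoso.com"],
    "apps": ["LOBExpenses.exe"],
}

def should_trigger_vpn(destination_host=None, launching_app=None):
    """Decide whether an app launch or a name lookup should bring up the VPN."""
    if launching_app in VPN_TRIGGERS["apps"]:
        return True  # per-app trigger: connect when this app starts
    if destination_host is not None:
        # DNS trigger: connect when resolving a corporate name suffix
        return any(destination_host.endswith(s) for s in VPN_TRIGGERS["dns_suffixes"])
    return False
```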
Users can work from the device of their choice to access corporate resources regardless of location
New in Windows Server 2012 R2 are the Web Application Proxy and Work Folders. The Web Application Proxy provides the ability to publish access to internal resources and to optionally require Multi-Factor Authentication at the edge.
Work Folders is a new file sync solution that allows users to sync their files from a corporate file server to their devices. The protocol for this sync is HTTPS based. This makes it easy to publish via the Web Application Proxy. This means that users can now sync from both the Intranet and the Internet. It also means the same AD FS-based authentication and authorization controls described above can be applied to syncing corporate files. The files are then stored in an encrypted location on the device. These files can then be selectively removed when the device is un-enrolled for management.
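Conceptually, selective removal works by revoking access to the corporate data while leaving everything else alone. Here is a toy Python model of that idea; the class and the key handling are invented for illustration, not the Work Folders design.

```python
# Toy model of selective wipe: corporate files live in an encrypted
# location tied to an enrollment key; un-enrolling revokes the key, so the
# corporate data becomes unreadable while personal files are untouched.

class DeviceStorage:
    def __init__(self):
        self.enrollment_key = "key-v1"  # stand-in for a real encryption key
        self.corp_files = {"report.docx": ("key-v1", b"quarterly numbers")}
        self.personal_files = {"photo.jpg": b"vacation"}

    def unenroll(self):
        """Selective wipe: drop only the key protecting corporate data."""
        self.enrollment_key = None

    def read_corporate(self, name):
        key, data = self.corp_files[name]
        if key != self.enrollment_key:
            raise PermissionError("encryption key revoked")
        return data

store = DeviceStorage()
before = store.read_corporate("report.docx")
store.unenroll()
```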
On-premises and cloud-based management of devices within a single console
One important data point for us when we planned People-centric IT was the feedback we gathered from customers about the need to reduce client management infrastructure costs and complexity. To do this, we worked hard to integrate Configuration Manager and Windows Intune. Our vision was for IT teams to use the Configuration Manager administrator console to “manage devices where they live”: on-premises desktops and laptops are serviced through existing on-premises infrastructure, and Internet-connected devices are serviced through cloud infrastructure.
All of this functionality is now available: the management of all of these devices and all of this infrastructure can live in one place, the already widely used Configuration Manager console. Client management and security are now offered in a single, unified solution, which makes it easier to manage devices and applications and to address threats and non-compliance. If you’re a current Configuration Manager customer, adding Windows Intune cloud-based management is quick and easy: just deploy a Windows Intune connector to your existing System Center 2012 Configuration Manager deployment and you’re ready to go.
Simplified, user-centric application management across devices
With Configuration Manager and Windows Intune, we’ve made it easy to ensure that applications are delivered in the optimal method for each device to ensure worker productivity. Configuration Manager allows the administrator to define the application once and then target it to a user or group. It evaluates the user’s device type and network connection capabilities, and then delivers the appropriate method (local installation, App-V, RemoteApp, etc). As a result, whether your employee is using a laptop, VDI session, or iPad – or all of these – you can deliver the app to that user with the best experience on each device.
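As a rough illustration of that evaluation, the decision might look like the sketch below. The rules here are invented for the example; Configuration Manager’s actual behavior is driven by administrator-defined deployment types.

```python
# Invented rules illustrating "define the app once, deliver per device":
# the same application definition yields a different delivery method
# depending on device type and connectivity.

def pick_delivery_method(device_type, network_ok, windows_only_app):
    """Choose a delivery method for one centrally-defined application."""
    if device_type in ("ipad", "android"):
        # Non-Windows device: a Windows-only app is delivered as a RemoteApp;
        # otherwise hand out a public-store link.
        return "remoteapp" if windows_only_app else "store_link"
    if device_type == "vdi":
        return "app-v"            # streamed virtual app for pooled sessions
    if not network_ok:
        return "remoteapp"        # weak link: avoid a large local install
    return "local_install"        # capable, well-connected Windows PC

method = pick_delivery_method("ipad", True, False)
```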
Because of the integration between Windows Intune and Configuration Manager, you can also extend application delivery to all major device types while still centrally managing application delivery across devices from a single console (see graphic below). Applications can include locally-installed MSI packages or App-V applications on Windows devices, remote applications using Microsoft virtualization solutions, web links, or public applications stored in the Windows Store, App Store, or Google Play.
Comprehensive settings management across platforms, including certificates, VPNs, and wireless network profiles
We’ve substantially expanded our settings management capabilities across platforms, including certificates, VPNs, and wireless network profiles. Policies can be applied across various devices and operating systems to meet compliance requirements, to the extent of the capabilities exposed by each platform, and we have extended native management to Windows RT, iOS, and Android. IT teams can provision certificates, VPN profiles, and Wi-Fi profiles on mobile devices, and get a full app inventory and push-installation of applications on corporate-owned devices. For personal devices, IT can inventory “managed” apps and publish apps, and IT teams can remotely wipe devices and unregister them from the management system (as supported by each operating system).
IT can better protect corporate information and mitigate risk by being able to manage a single identity for each user across both on-premises and cloud-based applications
As users blend their work and personal lives, and as organizations adopt a mixture of traditional on-premises and cloud-based solutions, IT teams need a way to consistently manage the user’s identity and provide users with a single sign-on to all their resources. We’re helping IT departments do this by providing users with a common identity across on-premises or cloud-based services leveraging existing Windows Server Active Directory investments, and then connecting to Windows Azure Active Directory.
A common part of connecting on-prem AD to Azure AD is deploying Active Directory Federation Services. In Windows Server 2012 R2, we have significantly enhanced AD FS to be easier to deploy and configure, and we’ve tightly integrated it with the Web Application Proxy for simple app publishing (see graphic below).
IT can access managed mobile devices to remove corporate data and applications in the event that the device is lost, stolen, or retired from use
Whether a device is lost, stolen or simply being repurposed, there will be times when IT needs to ensure that the corporate information stored on the device is no longer accessible. With the R2 wave of releases, we have added the ability to selectively wipe corporate information while leaving personal data intact.
Content removed when retiring a device:

Company apps and associated data installed by using Configuration Manager and Windows Intune
- Windows 8.1 Preview: Uninstalled, and sideloading keys are removed. In addition, any apps using Windows Selective Wipe have their encryption key revoked, so the data is no longer accessible.
- Windows RT: Sideloading keys are removed, but apps remain installed.
- Windows Phone 8: Uninstalled and data removed.
- iOS: Apps and data remain installed.

VPN and Wi-Fi profiles
- Windows 8.1 Preview: Removed.
- Windows RT: Not applicable.
- Windows Phone 8 and iOS: VPN: Not applicable. Wi-Fi: Not removed.

Certificates
- Windows 8.1 Preview: Removed and revoked.
- Other platforms: Revoked.

Settings
- All platforms: Requirements removed.

Management client
- Windows 8.1 Preview and Windows RT: Not applicable; the management agent is built in.
- Windows Phone 8 and iOS: Management profile is removed.
- Android: Device Administrator privilege is revoked.
IT can set policy-based access control for compliance and data protection.
With users working on personal devices, there are real challenges to ensure compliance standards are met and that information is protected. Inside Windows Server 2012 R2, we’ve added new capabilities in the Web Application Proxy, AD FS, and Work Folders to make it easy for IT teams to make resources available but also remain in control of data.
With the Multi-Factor Access Control capability in AD FS, access control policies can be authored using multiple criteria, including the identity of the user, the identity of the device, whether the request is coming from intranet or extranet, and any additional authentication factors used to identify the user.
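A policy that combines those criteria might be sketched as follows. This is illustrative Python with an invented rule set; AD FS expresses such policies in its own claim-rule language, not in code like this.

```python
# Invented multi-criteria access policy combining user identity, device
# identity, network location, and completed authentication factors.

def evaluate_access(user_groups, device_registered, location, mfa_completed):
    """Return 'allow', 'require_mfa', or 'deny' for a published resource."""
    if "finance" in user_groups and not device_registered:
        return "deny"            # sensitive group: workplace-joined devices only
    if location == "extranet" and not mfa_completed:
        return "require_mfa"     # step-up authentication from outside the network
    return "allow"
```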
As we showed at the TechEd Europe keynote in Madrid, Work Folders is integrated with Dynamic Access Control, providing the ability to automatically classify information based on content, and perform tasks such as protecting with Rights Management Services – even for data that is created and stored on clients!
To see People-centric IT, including System Center 2012 R2 Configuration Manager, Windows Intune, and Windows Server 2012 R2 in action, you can watch a complete presentation and end-to-end demonstration from the TechEd North America Foundational Session here. You can also learn more about People-centric IT by downloading the People-centric IT Preview Guide.
Be sure to download System Center 2012 R2 Preview Configuration Manager and Windows Server 2012 R2 Preview today!
Over the last 20 years I have led a number of PC/device management teams, and I have seen every possible variety of software solution along the way. I truly believe that what we are delivering in this 2012 R2 wave is the single most complete and comprehensive solution ever released for enabling users across PCs and devices. It’s amazing to see what our early foundations have helped us build for our users today.
These 2012 R2 products deliver a unified solution to end-users across PCs and devices, and, when you consider the need for this product to be powerful and scalable for the IT teams using it – there is simply no better platform for IT pros.
I encourage everyone to spend a few minutes today creating an account on Windows Intune (you can use the entire solution, with no capabilities held back, for a free 90-day trial) and test drive the management power available in this remarkable product. Try enrolling a few PCs and devices in the service* and start experimenting with managing your devices from the cloud. I think you’ll be impressed.
* In the current production service you will not see the complete set of device management capabilities enabled yet – we are running an invitation-only customer preview on a pre-production service. You will see these capabilities later this year.
Part 1 of a 9-part series.
Over the last three weeks, Microsoft has made an exciting series of announcements about its next wave of products, including Windows Server 2012 R2, System Center 2012 R2, SQL Server 2014, Visual Studio 2013, Windows Intune and several new Windows Azure services. The preview bits are now available, and the customer feedback has been incredible!
The most common reaction I have heard from our customers and partners is that they cannot believe how much innovation has been packed into these releases, especially in such a short period of time. There is a truly amazing amount of new value in these releases and, with this in mind, we want to help jump-start your understanding of the key scenarios that we are enabling.
As I’ve discussed this new wave of products with customers, partners, and press, I’ve heard the same question over and over: “How exactly did Microsoft build and deliver so much in such a short period of time?” My answer is that we have modified our own internal processes in a very specific way: We build for the cloud first.
A cloud-first design principle manifests itself in every aspect of development; it means that at every step we architect and design for the scale, security, and simplicity of a high-scale cloud service. As part of this cloud-first approach, we assembled a ‘Scenario Focus Team’ that identified the key user scenarios we needed to support. This meant that our engineers knew exactly what needed to be built at every stage of development, so no time was wasted debating what to build next. We knew our customers, we knew our scenarios, and that allowed all of the groups and stakeholders to work quickly and efficiently.
The cloud-first design approach also means that we build and deploy these products within our own cloud services first and then deliver them to our customers and partners. This enables us to first prove-out and battle-harden new capabilities at cloud scale, and then deliver them for enterprise use. The Windows Azure Pack is a great example of this: In Azure we built high-density web hosting where we could literally host 5,000 web servers on a single Windows Server instance. We exhaustively battle-hardened that feature, and now you can run it in your datacenters.
At Microsoft we operate more than 200 cloud services, many of which serve hundreds of millions of users every day. By architecting everything to deliver at that kind of scale, we are sure to meet the needs of enterprises anywhere, in any industry.
Our cloud-first approach was unique for another reason: It was the first time we had common/unified planning across Windows Client, Windows Server, System Center, Windows Azure, and Windows Intune. I know that may sound crazy, but it’s true – this is a first. We spent months planning and prioritizing the end-to-end scenarios together, with the goal of identifying and enabling all the dependencies and integration required for an effort this broad. Next we aligned on a common schedule with common engineering milestones.
The results have been fantastic. Last week, within 24 hours, we were able to release the preview bits of Windows Client 8.1, Windows Server 2012 R2, System Center 2012 R2, and SQL Server 2014.
By working together throughout the planning and build process, we established a common completion and Release to Manufacturing date, as well as a General Availability date. Because of these shared plans and development milestones, by the time we started the actual coding, the various teams were well aware of each dependency and the time to build the scenarios was much shorter.
The bottom-line impact of this Cloud-first approach is simple: Better value, faster.
This wave of products shows that the changes we’ve made internally allow us to deliver more end-to-end scenarios out of the box, with each of those scenarios delivered at a higher quality. This cloud-first approach also helps us deliver the Cloud OS vision that drives the STB business strategy.
The story behind the technologies that support the Cloud OS vision is an important part of how we enable customers to embrace cloud computing concepts. Over the next eight weeks, we’ll examine in great detail the three core pillars (see the table below) that support and inspire these R2 products: Empower People-centric IT, Transform the Datacenter, and Enable Modern Business Apps. The program managers who defined these scenarios and worked within each pillar throughout the product development process have authored in-depth overviews of these pillars and their specific scenarios, and we’ll release those on a weekly basis.
Pillar
Scenarios
Empower People-centric IT
People-centric IT (PCIT) empowers each person you support to work virtually anywhere on PCs and devices of their choice, while providing IT with an easy, consistent, and secure way to manage it all. Microsoft's approach helps IT offer a consistent self-service experience for people, their PCs, and their devices while ensuring security. You can manage all your client devices in a single tool while reducing costs and simplifying management.
Transform the Datacenter
Transforming the datacenter means driving your business with the power of a hybrid cloud infrastructure. Our goal is to help you leverage your investments, skills and people by providing a consistent datacenter and public cloud services platform, as well as products and technologies that work across your datacenter, and service provider clouds.
Enable Modern Business Apps
Modern business apps live and move wherever you want, and Microsoft offers the tools and resources that deliver industry-leading performance, high availability, and security. This means boosting the impact of both new and existing applications, and easily extending applications with new capabilities – including deploying across multiple devices.
The story behind these pillars and these products is an important part of our vision for the future of corporate computing and the modern datacenter, and in the following post, David B. Cross, the Partner Director of Test and Operations for Windows Server, shares some of the insights the Windows Server & System Center team have applied during every stage of our planning, build, and deployment of this awesome new wave of products.
Historically, Microsoft’s approach to building quality software and services was to focus first on quality assurance techniques targeted at ensuring that component and feature functionality behaved according to design specifications. The limitation of this approach was that it ignored the fact that customers do not use the components and features of products in isolation; they use combinations of them to fulfill the needs of their business. Understanding this made it critically important to validate all the functionality within the product, and then ensure it met customer expectations for interoperability.
As we move to the cloud with a faster and more agile cadence, and combine this move with a more complex set of components and requirements, a dependence on long-term beta testing is no longer realistic. With the Windows Server 2012 R2 and System Center 2012 R2 engineering cycle, we evolved our engineering planning, execution, and quality validation processes by ingraining an end-to-end focus on customer scenarios into all phases of the development lifecycle. Specifically, starting with Windows Server 2012 (and expanding in the Windows Server 2012 R2 release), we used customer-specific end-to-end scenarios to plan, prioritize, design, implement, and validate the solutions we bring to market. This approach was analyzed and discussed in several sessions at TechEd 2013 a few weeks ago.
Our approach is to define scenarios from a customer point of view and develop a comprehensive understanding of both what they want to achieve and how they measure success. This scenario-centric approach to building the platform was the primary reason why our 2012 versions of Windows Server and System Center were so well received by our customers. This approach is now a critical part of how we build products and services going forward.
In the Windows Server and System Center division, we have applied this operational philosophy at every stage of development, and the result is a clearly defined value proposition for our customers and end users. The organizational principles we follow are simple:
This engineering model goes well beyond processes and tools – it is a mindset that places an emphasis on becoming a customer advocate. Our engineers have shifted their attention from the perspective of “does this feature work?” to “will this scenario delight our customers?” Validation and quality are, therefore, not only measured from actual deployments and measurements of success criteria, but also from customer feedback and satisfaction metrics that are provided throughout the engineering cycle.
In all of our current operations we practice both interface validation as well as end-to-end integration testing to validate quality holistically. In addition to this, when moving applications, services and other functionality to the cloud, we must also test in-production or “live” under varying conditions and over time with real world customers. Also, in keeping with a long-standing Microsoft tradition, we still always start by eating our own dog food by deploying every new scenario internally to experience and validate the capabilities of our innovation first hand. We learn from our own internal Services operations and, based on our early deployments, we ensure that we are delivering powerful end-to-end scenarios that enable quick and reliable business solutions.
With this in mind, we are applying the following steps to drive this focus and process in our ongoing engineering cycles:
We currently follow each of these steps for every new solution, and the current wave of products is an example of this philosophy at work. Our cloud-first focus has enabled common/unified planning which has in turn allowed our teams to collaborate to deliver integrated customer-focused scenarios.
Customer success is the inspiration for our customer-focused engineering, end-to-end product integration, and scenario validation. Windows Server 2012 R2, System Center 2012 R2, and the rest of this amazing wave of products, are a concrete example of our vision and promise for the Microsoft Cloud OS.
– Brad
On Thursday Gartner released their 2013 Magic Quadrant for x86 Server Virtualization Infrastructure, in which they positioned Microsoft as a “Leader” in the industry.
For anyone unfamiliar with the format of the graphic below, the Magic Quadrant is used by Gartner to present an apples-to-apples review of the capabilities of vendors in a specific part of a market. While the position of each vendor on the graphic is important, it's also important to read the strengths and cautions for each vendor in the report linked above.
It's great to see recognition like this from a third party like Gartner; Microsoft has been consistently moving ‘up’ and to the ‘right’ in the Magic Quadrant graphic for four consecutive years. I believe our positioning as a “Leader” confirms that Windows Server 2012, System Center 2012, and Hyper-V are powerful options for enterprise-scale virtualization.
I recommend downloading the full report (compliments of Microsoft) and reading it in detail.
Note:
This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from [insert client name or reprint URL].
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
Gartner, Magic Quadrant for x86 Server Virtualization Infrastructure, T. Bittman et al., 27 June 2013.
Introduction: Beginning and Ending with Customer-specific Scenarios
Back in May, I discussed how technologies such as Windows Server 2012, Hyper-V, and System Center 2012 SP1, provide the most scalable, reliable, and feature-rich platform to run key, tier-1 workloads like SQL Server, SharePoint and Exchange, at the lowest cost.
To help customers virtualize these workloads, we’ve recently published a number of best practice whitepapers for the virtualization and management of SQL Server, SharePoint and Exchange, and we’ve also shared some phenomenal performance testing results which underscore that the Microsoft platform is unequivocally the best platform for virtualizing tier-1 workloads.
But I’m realistic – I understand that there are organizations who also run other tier-1 applications within their environments, and Microsoft wants to ensure that our customers can virtualize those other workloads with the same confidence they have when virtualizing Microsoft workloads.
One of the most common workloads within enterprise environments is SAP Enterprise Resource Planning (ERP), which is a solution that provides access to critical data, applications, and analytical tools, and it helps organizations streamline processes across procurement, manufacturing, service, sales, finance, and HR. For a demanding workload like SAP ERP, many of our customers assume that they will need to run the solution on physical servers – and this assumption is backed up by the large number of existing SAP benchmarks which highlight the huge scale and performance on a physical platform.
So what does that mean for customers who want to virtualize SAP ERP? Can it be virtualized successfully and deliver the necessary levels of performance required for tier-1 applications?
The answer is unequivocally, yes.
I’m proud to announce that, on June 24th, 2013, through a close collaboration between SAP, HP and Microsoft, a new world record was achieved and certified by SAP for a three-tier SAP Sales and Distribution (SD) standard application benchmark, running on a set of 2-processor physical servers.
The application benchmark resulted in 42,400 SAP SD benchmark users, 231,580 SAPS, and a response time of 0.99 seconds, showcasing phenomenal performance using a DBMS server with just 2 physical processors of 16 cores and 32 CPU threads.
The best part? Not only was SAP ERP 6.0 (with Enhancement Package 5) running on SQL Server 2012 on Windows Server 2012 Datacenter, but the configuration was completely virtualized on Hyper-V. In addition, this is the first SAP benchmark with virtual machines configured with 32 virtual processors, and consequently the first with SQL Server running in a 32-way virtual machine. The result is also more than 30% higher than a previous virtualized configuration (2 processors/12 cores/24 CPU threads) running on VMware vSphere 5.0.
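The “more than 30% higher” figure follows directly from the two published SAPS numbers cited in this post:

```python
# Comparing the two published SAPS results quoted in this post.
hyperv_saps = 231_580    # Hyper-V result (June 2013)
vsphere_saps = 175_320   # VMware vSphere 5.0 result (October 2011)

improvement = (hyperv_saps - vsphere_saps) / vsphere_saps
assert improvement > 0.30   # roughly a 32% higher SAPS throughput
```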
It’s clear from this benchmark that with the massive scalability and enterprise features in Windows Server 2012 Hyper-V, along with HP’s ProLiant BL460c Gen8 servers, 3PAR StoreServ Storage and Virtual Connect networking capabilities, customers can virtualize their mission critical, tier-1 SAP ERP solution with confidence.
You can find the full details of the benchmark on the SAP Benchmark Site, and you can also read more information about running SAP on Windows Server, Hyper-V & SQL Server, over on the SAP on SQL Server Blog.
For more details visit: http://www.sap.com/benchmark
Benchmark performed in Houston, TX, USA on June 8, 2013. Results achieved 42,400 SAP Standard SD benchmark users, 231,580 SAPS, and a response time of 0.99 seconds in an SAP three-tier configuration, SAP EHP 5 for SAP ERP 6.0. Application servers: 12 x ProLiant BL460c Gen8 with Intel Xeon E5-2680 @ 2.70GHz (2 processors/16 cores/32 threads) and 256GB, using Microsoft Windows Server 2012 Datacenter on Windows Server 2012 Hyper-V. DBMS server: 1 x ProLiant BL460c Gen8 with Intel Xeon E5-2680 @ 2.70GHz (2 processors/16 cores/32 threads) and 256GB, using Microsoft Windows Server 2012 Datacenter on Windows Server 2012 Hyper-V with Microsoft SQL Server 2012 Enterprise Edition.
VMware ESX 5.0-based benchmark performed in Houston, TX, USA on October 11, 2011. Results achieved 32,125 SAP Standard SD benchmark users, 175,320 SAPS, and a response time of 0.99 seconds in an SAP three-tier configuration, SAP EHP 4 for SAP ERP 6.0. Application servers: 10 x ProLiant BL460c G7 with Intel Xeon X5675 @ 3.06GHz (2 processors/12 cores/24 threads) and 96GB, using Microsoft Windows Server 2008 Enterprise on VMware ESX 5.0. DBMS server: 1 x ProLiant BL460c G7 with Intel Xeon X5675 @ 3.06GHz (2 processors/12 cores/24 threads) and 96GB, using Microsoft Windows Server 2008 Enterprise on VMware ESX 5.0 with Microsoft SQL Server 2008 Enterprise Edition.
In place of this week’s podcast (since everyone is on their way back from Spain!), check out this replay of my TechEd Europe keynote from earlier this week.
I am absolutely thrilled to be delivering the Day 1 keynote at TechEd Europe for the third straight year. This is consistently an amazing event with the very best and brightest companies and IT pros across Europe, not to mention some equally amazing products to highlight and demonstrate.
There were some big announcements at TechEd North America (e.g. a refresh of our key enterprise IT solutions like Windows Server & System Center 2012 R2, Windows Intune and SQL Server 2014, as well as Visual Studio 2013, new Windows Azure BizTalk services, Azure per-minute pricing, new MSDN subscriber benefits, etc.), and TechEd Europe is going to have some surprises of its own.
In particular, I’ll continue telling the story that began at TechEd North America in New Orleans by announcing the immediate availability of preview software for Windows Server 2012 R2, System Center 2012 R2, and SQL Server 2014.
These products are going to have a massive impact on companies around the world – and IT pros are going to see the traditional boundaries between datacenters vanish, and a true hybrid cloud emerge.
Microsoft has made a big bet on what we call our cloud-first design principles, and many companies are already benefiting. Key examples in Europe include Telefónica and DDM CineTrailer – both of whom are already operating Microsoft hybrid cloud solutions.
Telefónica is the largest telecom company in Spain, and, as of July 2013, will deploy Windows Server Hyper-V and SQL Server, with the goal to virtualize more than 80 percent of its IT services and design for expansion into the Windows Azure platform as needed. The company expects this move to result in a 15% cost savings in the next three to five years while making their business more agile and productive.
DDM is a digital media company from Italy that developed its popular movie-viewing app CineTrailer on Windows Azure. DDM’s customers expect to consume content through a variety of device platforms, and the agency’s previous solution, Amazon Web Services (AWS), could not scale easily enough to ensure all the services across PCs, mobile devices, and connected TVs could be maintained – especially with the application’s rapid growth rate.
These customer stories illustrate that our cloud-first approach is not dependent on something we’re promising out on the horizon – but it is possible with products that are ready right now.
If you’ve followed Microsoft in the news over the last two years, then you know that we are focused on enterprise cloud services. We already operate a vast network of datacenters around the world that deliver more than 200 cloud services, and, in a post two weeks ago, I shared just a few of the reasons to be very excited about the Microsoft cloud. Each of our breakthroughs enables companies around the world to build a flexible, scalable, dynamic hybrid cloud that can meet demand and support a modern workforce.
The products announced at TechEd North America, and now delivered at TechEd Europe, comprise the components of a Cloud OS that can fundamentally transform the way IT operates. For example:
When we talk about the delivery of our Cloud OS strategy, this is exactly what it looks like. To help you better understand how this comes together, here’s a slide from my keynote presentation that spells out how these new products and services fit into the Cloud OS:
This approach is broad enough to help any IT team address a wide range of situations, and flexible enough to be tailored for any company or industry.
The Microsoft Cloud OS challenges other cloud providers to justify their high costs, time-consuming upkeep, and expensive maintenance fees. When you consider that Microsoft alone can provide platform, management, and productivity tools – it’s a very exciting proposition for companies around the world.
As you make your moves to the cloud, I encourage you to choose your cloud provider very carefully. Key considerations during this process should be simplicity and efficiency. Demand that cloud providers demonstrate exactly how they will tie together devices, applications, data and infrastructure – and learn everything you can about how this will enable your business and your employees in the years to come.
I believe that Microsoft’s track record with new enterprise cloud offerings makes it the best equipped to deliver an end-to-end app and IT experience – everything from building and deploying the app to maintaining it and managing every device that uses it. But don’t just take my word for it – compare and contrast all of the offerings out there to see what Microsoft can do for your business.
To watch my keynote live, follow this link or you can watch it later on-demand here.
In this week's episode, we discuss the very popular Service Template for SharePoint 2013 and talk with its creator, Jim Britt.
I first talked about this service template in my keynote at MMS 2013, and I encouraged everyone to read more about it in the "Automation" track of the Building Clouds blog.
The Service Template is also something I covered briefly in a previous post about workloads, where I noted that what makes the Service Template so interesting is that it demonstrates just how much can be done with an on-premise private cloud.
This solution is also usable by hosters who want a process that is repeatable and streamlined for customer use.
You can download the service template here.
Additional information about the Service Template’s benefits and features:
The new Hyper-V features introduced by Windows Server 2012 were game changing for the IT industry, and the impact has been so positive that just about every customer I speak with is conducting their own tests of Windows Server 2012 and Hyper-V as their hypervisor.
This is so common that I’ve gotten a lot of questions about how to streamline not just the testing process, but the actual migration process from VMware to Hyper-V. I’ll outline a great option for how to do that in this post. Ultimately, it’s about converting the VMs from the VMware VMDK format to the Hyper-V VHD format.
There are great tools from a number of partners to do this migration (companies like Vision Solutions, Embotics, Racemi, 5Nine, Quest, and NetApp), and last year, Microsoft also released the Microsoft Virtual Machine Converter (MVMC) – a free tool that provides a simple and easy conversion experience from VMware to Hyper-V.
MVMC makes converting a few virtual machines from VMware very easy and has been very successful for people testing out the capabilities of Hyper-V in Windows Server 2012.
Once you are ready to migrate your enterprise from VMware to Hyper-V, you may find the MVMC’s wizard-driven approach limiting, since it can only convert a single machine at a time and doesn’t support batched jobs – so using the wizard to migrate an entire virtual infrastructure requires a fair bit of manual effort. Luckily, the MVMC does contain a command-line executable that can be run within a PowerShell script or an Orchestrator runbook.
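To illustrate the batching idea, here is a minimal sketch of driving a command-line converter from a script – shown in Python for brevity (in practice you would use PowerShell or an Orchestrator runbook, as described above). The executable path, argument order, and VM inventory are placeholder assumptions, not MVMC’s exact syntax:

```python
import subprocess

# Illustrative only: the converter path and argument order below are
# placeholder assumptions, not MVMC's documented syntax. In practice you
# would query vCenter for the VM inventory instead of hard-coding it.
MVMC_EXE = r"C:\Program Files\MVMC\mvmc.exe"

def build_conversion_jobs(vms, mvmc_path=MVMC_EXE):
    """Build one command line per VM so conversions can run as a batch."""
    return [[mvmc_path, vm["source_vmdk"], vm["dest_vhd"]] for vm in vms]

def run_batch(jobs, dry_run=True):
    """Run each conversion in sequence; dry_run just prints the commands."""
    for cmd in jobs:
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.run(cmd, check=True)  # one conversion at a time

# Hypothetical inventory of machines to convert.
vms = [
    {"source_vmdk": r"\\vmware\web01.vmdk", "dest_vhd": r"D:\vhds\web01.vhd"},
    {"source_vmdk": r"\\vmware\sql01.vmdk", "dest_vhd": r"D:\vhds\sql01.vhd"},
]
jobs = build_conversion_jobs(vms)
run_batch(jobs)  # dry run: prints the commands without converting anything
```

Even this toy version shows why a batch wrapper beats the wizard: the loop scales from two machines to two thousand without any additional clicks.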
So how do you get started? The first step would be to start writing scripts to automate MVMC, but, as it turns out, we already did this for you. Better yet, we’re releasing it free! Let me introduce you to MAT…
The MVMC Automation Toolkit (MAT) provides a series of sample PowerShell scripts which automate the migration of large numbers of virtual machines using the MVMC.exe as the conversion engine. Since the MVMC.exe doesn’t provide a method for collecting virtual machines from the VMware environment, the MAT will collect all the machines that meet the criteria for conversion.
MAT was designed to be easy to use. Point it at your VMware environment and it will provide you a list of all the machines that can be converted. Next step: Pick a handful to convert and grab some lunch.
When you come back your brand new Hyper-V virtual machines will be waiting.
MAT takes the stress out of the conversion by removing VMware tools, handling the disk geometry conversions, and quickly getting your virtual machines up and running on Hyper-V. And if you have hundreds (or even thousands) of machines to convert, that’s no problem – you can run several MAT servers at once. The multiple MAT servers will automatically coordinate with one another and speed you through the conversion.
MAT uses SQL Express to store the conversion information for each virtual machine so that you can have consistent data during your conversion – whether that takes days, weeks or months. On top of all this, the MAT is written in PowerShell, so it’s easy to understand, and incredibly easy to customize and extend.
The MAT was created by the same team that built the PowerShell Deployment Toolkit (PDT), and the two share much of the same framework. The MAT also borrows from some of the concepts used in the runbooks by another project from that team: Orchestrating Hyper-V Replica with System Center for Planned Failover. (The PDT, by the way, is a great tool that will let you deploy all the System Center 2012 SP1 components quickly and easily.)
If you are preparing to migrate a large number of VMware virtual machines, take a look at MAT first. The Building Clouds Blog is home to several posts about the MAT (specifically the VM Migration track) and will be a continuing source of information about it.
You can download the MAT here.
In this episode, Brad expands on the topic he discussed earlier in the week: cloud-based corporate device management.
As noted in that previous post, to see some amazing corporate device management demos, take a look at Molly Brown’s demo during my TechEd North America keynote (skip ahead to 37:50), and also check out the IW demos during the TechEd Foundation Session (starting at about 19:00). During that Foundation Session demo, watch for the work-folder opt-in (24:45), and the iOS opt-in for registration (28:15).
As always, if you have any questions about today’s podcast or suggestions for future topics, let me know!
In a previous post I talked about technical, people, and cloud computing trends in corporate IT. The practical impact of these trends is that the shift to a “bring your own device” (BYOD) or “choose your own device” (CYOD) enterprise workforce is here to stay. This shift places a lot of responsibility on IT departments – and it’s Microsoft’s job to support them every step of the way.
To make the most of this technical shift, IT teams need to do three key things:
Like many of you, I spent last week at TechEd North America, and, as I shared in my keynote, we announced a ton of new and exciting capabilities across our products that will enable each and every one of these technical trends.
When I think of Empowering Users, I do so from the mindset and perspective of both the IT team, and the people using these devices. I want the users of corporate devices to have simple, reliable app and data access on any of their devices in any environment. I want the IT teams that support device users to enable this kind of access while managing risk, protecting corporate information, and staying compliant.
So what makes these solutions so important to both users and IT teams?
The importance can be boiled down to one word: Efficiency. These solutions enable IT teams to deliver a pre-configured experience that has been fine-tuned to increase workforce productivity – across form factors, infrastructures, and environments.
What enables this type of integration is the IT department’s ability to do three fundamental things: Publish services, set requirements for access to these services, and then provide the means for users to opt in to these services.
A lot of this productivity will go underutilized, however, if it doesn’t extend to the mobile experience. As noted in my earlier post on this BYOD/CYOD topic, the most essential element for mobility is allowing the workforce to use their personal devices for work by offering an “opt in” to IT services, while allowing IT to set standards that must be met in exchange for that access. One example of this is Exchange ActiveSync (EAS), which enables devices to synchronize a user’s inbox, calendar, and other items with their Microsoft Exchange Server mailbox, while applying device configuration settings (e.g. requiring a password or PIN to satisfy IT requirements). As you saw at TechEd, we’ve moved well beyond EAS with deeper management capabilities supported directly within Windows, Windows RT, Windows Phone 8, and iOS, while continuing to make it easy for people to connect their devices to corporate resources and install apps. It’s a win-win for device users and IT.
Over the next few weeks I’ll discuss in depth the awesome new functionality we’ve built into Windows Server, System Center Configuration Manager, and Windows Intune.
With this thorough integration between Windows Intune and Microsoft System Center 2012 Configuration Manager, I’m excited that IT teams have a powerful single management solution for PCs and devices. This single solution is an absolutely unmatched source of simplicity and efficiency for enterprise IT pros, and the sheer volume of problems it solves allows IT teams to focus their time and energy on the core needs of their business.
All of this allows for some major cultural and behavioral changes regarding how device users get data, and how IT pros protect that data. Device users now comply with IT policy in exchange for access to corporate data, and those users self-serve to get the published corporate apps and data their jobs require.
To see these principles in action, check out Molly Brown’s demo during my keynote at TechEd (skip ahead to 37:50), and also watch the IW demos during the TechEd Foundation Session (starting at about 19:00). During that Foundation Session, keep an eye out for the work-folder opt-in (24:45), and the iOS opt-in for registration (28:15).
This week’s podcast is preempted by events at TechEd 2013, but, in case you haven't had a chance to see it yet, my Day 1 keynote is included below in its entirety (including the opening video with Aston Martin!).
Thanks again to everyone who attended this year’s event – and I can’t wait to meet up again at TechEd EMEA!
Earlier this week (both on-stage and on this blog) I commented that “cloud computing is no longer a spectator sport” and that now, more than ever, there are countless reasons to get excited about what your business can do in the cloud.
Whether you’re looking to dramatically scale, dynamically innovate, or any other combination of superlatives – the cloud is the future of business.
To give you an idea of how far the cloud has come, and to what lengths Microsoft is going to support it, consider these developments:
As we all prepare to head back to our companies and make the most of what we’ve learned from each other at TechEd 2013, I want to conclude with four ideas about where IT teams should seriously consider focusing right away:
With areas of emphasis like these, and the power and scale of a modern datacenter, this is a genuinely limitless opportunity for our industry.
Yesterday was a HUGE day for the IT community, the tech industry, and Microsoft. In case you missed my keynote or any of the other sessions, you can watch them on-demand here (you can also read the keynote transcript).
To get into even more detail, check out the list below with topics and links to yesterday’s big announcements.
[Posted Monday, June 3]
In Satya Nadella’s post from earlier today, he describes Microsoft’s transformation to a “cloud-first” business:
Two years ago we bet our future on the cloud and quietly refocused our 19 billion-dollar software business by completely transforming our products, culture and practices to be cloud-first. We knew the journey would be long and challenging with plenty of doubters. But we forged ahead knowing that the cloud transition would change the face of enterprise computing. […]
To enable this transformation we had to make deep changes to our organizational culture, overhauling how we build and deliver products. Every one of our division’s nearly 10,000 people now think and build for the cloud – first.
The fruits of this labor will be announced during my keynote today.
Technology leaders love talking about the promise of technologies that are just over the horizon, but Microsoft is now in the unique position of doing much better than that. We are now delivering on our vision with a wave of enterprise products built with this cloud-first approach: Windows Server & System Center 2012 R2 and the update to Windows Intune bring cloud-inspired innovation to the enterprise, and enable hybrid scenarios that cannot be duplicated anywhere in the industry.
With this new wave, our partners and customers can do four key things:
These developments shatter the obstacles which once stood in the way of turning traditional datacenters into modern datacenters, and which inhibited the natural progression to hybrid clouds. These hybrid scenarios are especially exciting – and Microsoft’s comprehensive support for them sets us apart from each and every other competitor in the tech industry.
We deliver that with a Cloud OS approach based on the massively scalable power of Windows Server & System Center – which already power thousands of public and private clouds all over the world – the cloud-based management of Windows Intune, and the on-premise management of System Center Configuration Manager. When combined, these solutions provide the unified environment that organizations of all sizes and shapes can use to manage each of their corporate-joined devices, as well as the apps they have running across any of their clouds and servers.
There are a lot of companies around the world that have adopted a pragmatic “wait and see” mindset regarding their own move to the cloud. This kind of careful consideration and planning is important, but now is the time to take action. The era when the IT industry was a spectator-friendly field has passed, and companies need to decide which cloud model and cloud vendor is best for them.
When you do make this decision, I recommend you keep a few variables in mind: Look for a cloud partner that brings you simplicity and delivers consistency across clouds. This will ensure that you don’t get locked in and that you can maintain VM mobility. Also, insist on a concrete solution that ties together devices, apps, data, and infrastructure. This cloud consistency and concrete approach will provide what you need to enable and support a modern datacenter.
On a daily basis I meet with my engineering teams to ensure that every decision we make and every line of code we write is delivering that simple, unified experience – an experience that guarantees you’ll never have to piece your datacenter together with different platforms and providers. Our goal is the delivery of cohesion, interoperability, and seamless operation. With these new features and capabilities, I truly believe we have met and exceeded this goal – and I hope you do, too.
You can find out more about today’s announcements here, and, to help even more customers get started with the cloud, Microsoft is now giving more than half a million MSDN subscribers free, year-round access to 3 new development servers to develop and test new apps on Windows Azure.
This is an exciting and transformational time, and I am looking forward to charging into a bigger and better IT future with you.
In this episode, I discuss what to expect from TechEd North America (June 3) and TechEd Europe (June 25), as well as some of the big themes in my Day 1 keynote at both events.
This is a huge event for the tech industry, and I look forward to seeing a lot of friends and partners in both cities! If you haven't registered for TechEd yet, visit the official site here.
Back in mid-April, I discussed the importance of data center high availability, and how the cost of data storage is minimal compared to the cost of not being able to access that data. There are a lot of options for creating a highly available system with disaster recovery and data backup protocols – but many of the current in-market options are expensive, labor intensive, and ineffective.
Typical disaster recovery services, for example, are surprisingly complex and require an array of SAN replicators with symmetric hardware on both sides, and typical recovery times from a secondary backup are far too long for an enterprise organization. (For more on this, read up on Recovery Time Objectives and Recovery Point Objectives.)
More and more customers are finding a solution to this by moving DR and backup services to the cloud.
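To make those objectives concrete, here is a back-of-the-envelope sketch with purely illustrative numbers (not vendor benchmarks): worst-case data loss (RPO) is bounded by how often you replicate, and recovery time (RTO) by how long restore or failover takes.

```python
def meets_objectives(replication_interval_min, recovery_time_min,
                     rpo_min, rto_min):
    """Worst-case data loss equals the replication interval (RPO);
    time back online is the restore/failover time (RTO)."""
    return (replication_interval_min <= rpo_min
            and recovery_time_min <= rto_min)

# Illustrative numbers only.
# Nightly tape backup: replicate every 24 hours, restore takes ~8 hours.
nightly_tape = meets_objectives(24 * 60, 8 * 60, rpo_min=60, rto_min=120)

# Asynchronous replication to a secondary site: every 5 min, failover ~30 min.
async_replica = meets_objectives(5, 30, rpo_min=60, rto_min=120)

print(nightly_tape, async_replica)  # prints: False True
```

Only the asynchronous-replication scenario meets a one-hour RPO and a two-hour RTO in this sketch – which is exactly why replicating to a secondary site or the cloud changes the economics of disaster recovery.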
The reasons for doing this are simple. Enterprises can now take full advantage of the added functionality and capacity within the Microsoft cloud, while maximizing the ease and cost-efficiency of this move (regardless of the size of the company). This remains true no matter where that secondary site is located – whether within your own datacenter, with a service provider, or hosted in Windows Azure – it is always up and always available.
The cloud and its accompanying virtualization solutions offer a huge upgrade for the countless companies that are still using tape, offsite backups, or even warm standby sites. With a cloud-based model, the storage costs are dramatically reduced and – better yet – Azure Storage provides geo-replication, which creates a replica of your data in a different Azure datacenter. This helps protect against local disasters and ensures your data remains accessible.
I know a certain bias is implicit given my role here at Microsoft, so don’t just take my word for it that the Microsoft cloud offers a better option – weigh your options carefully. Regardless of your organization’s IT infrastructure, consider a couple of important examples of how Windows Server 2012 and Windows Azure have continuous data availability built into every aspect of their design, with world-class features like:
These features ensure that, even in the face of a disaster, you can work with an incredibly consistent management experience across your clouds. In addition to this consistent user experience, Windows Azure seamlessly interoperates with Windows Server 2012 to act as an extension of the customer’s data (and vice versa) by supporting reliability across VMs, seamless networking, identity federation, and compute elasticity. Each of these features, in turn, supports a long list of scenarios like failover/failback, item-level recovery, workload migration, bi-directional VM mobility, patch validation, and more.
I don’t mean for all of this to start sounding like a pitch from the marketing department, but I do want to paint a picture of a comprehensive and complex in-market disaster recovery solution. I’m extremely proud of what we have been able to deliver for our customers, and I’m confident saying that no other company – or combination of companies – can offer a solution like this. Looking ahead, we are constantly working to develop increasingly streamlined ways to integrate solutions like these – and this integration is a top priority as we continue to refine and innovate these tools.
These products essentially change the way our customers and partners think about (and react to) disaster recovery. They also change the relative magnitude of one of these events. With the Microsoft cloud, a massive system failure may no longer be a matter of life and death for the business – instead, it may just be a matter of minutes.
In episode 7, I take some extra time to look at Wednesday's "Better Together" post in greater detail.
In particular, we talk about the performance benefits of running Microsoft workloads (Exchange, SharePoint, etc.) on Microsoft platforms (Windows, Windows Server, Hyper-V, System Center, etc.), and how IT teams can combine these tools to maximize the speed, compute, and performance of enterprise workloads.
Got a question for a future episode? Let me know!
I’ve heard IT departments all over the world share a very similar joke about how the rest of the company has no idea where their e-mail comes from or where their SharePoint sites are stored – they just want these things to be there when they need them.
IT departments, however, do know exactly where these e-mails come from and where the SharePoint data resides, and they also know that these apps need to be scalable and high performance, and to provide enterprise features and capabilities right out of the box. This fact is something we focus on constantly, and teams throughout Microsoft have spent millions of hours (22.4 million on System Center alone!) building platforms – like Windows Server, Hyper-V, System Center, and SQL Server – that are fine-tuned for these workloads. Simply put: Microsoft platforms get the most out of workhorse applications like SQL, SharePoint, and Exchange.
Running Microsoft workloads on non-Microsoft virtualization solutions is a lot like building a house with stone from two different quarries. At the end of the day, you may simply be glad to have a house, but it can be a big gamble to use two unrelated kinds of building material on something so important. If nothing else, you run the risk of your house looking a bit like the Washington Monument.
Without a world-class infrastructure, many users of SQL, SharePoint, Exchange, and other workloads may never see the full spectrum of what these applications can really offer in terms of scalability, compute power, and overall performance.
I’d like to examine each of these three metrics in this post.
When we think about scalability and compute power, Hyper-V has emerged as a particularly dynamic product. By the numbers, Hyper-V represents the single best option for customers looking to virtualize their mission critical, tier-1 applications and workloads at the lowest cost possible.
Consider this: Hyper-V delivers 64 vCPUs and 1 TB of RAM per VM, with no vSphere-like SKU-specific restrictions, and, on top of this, Hyper-V supports double the physical host size of vSphere 5.1. And, on top of all that, if an IT team needs to dramatically scale out a resilient, highly available infrastructure, Hyper-V supports up to 64 physical nodes and 8,000 VMs per cluster – double that of vSphere 5.1. All of this functionality is available right out of the box.
If you’d like to get deeper into these figures (and I highly recommend it), take the time to read this highly detailed whitepaper.
One graphic I find particularly interesting within the document is this simple comparison:
With these scalability and compute figures in mind, let’s look at the performance metrics from third-party testing of Microsoft workloads like SQL and Exchange on our management software.
The performance of Microsoft workloads on Microsoft management technologies is something that has been carefully and exhaustively measured, and the results are pretty surprising.
The third-party testing of SQL Server 2012 running on Windows Server 2012 Hyper-V examined an existing SQL Server 2012 OLTP workload that was previously vCPU-limited and, in the testers’ words, “increased the performance by six times, while the average transaction response times improved by five times.” The tests also found that Hyper-V’s overhead of 6.3% was recorded “when comparing SQL Server 2012 OLTP workload performance of a physical server to a virtual machine configured with the same number of virtual CPU cores and the same amount of RAM.”
(To read a detailed technical analysis of this testing, you can view the full report here. I also recommend the SQL+Windows Server+System Center “better together” overview, and the SQL+Hyper-V best practices documents for additional information.)
Also consider how System Center 2012 SP1 can streamline the deployment and management of SQL Server in a virtualized environment. The benefits include the ability to rapidly deploy new standardized SQL Server virtual machines, support for self-provisioning of new SQL Server virtual machines, the creation of an IaaS platform that can request and provision new clouds or VMs, improved data security, and reduced downtime.
Windows Server 2012’s support for Exchange workloads is another area that was closely examined. In a test that deployed Exchange 2013 on twelve Hyper-V virtual machines running on a single physical server with 48,000 simulated users, we saw these results: “Average database read response times ranged between 5.02 and 15.31 milliseconds, well below the Microsoft recommended limit of 20 milliseconds.”
(To get into the details of these tests, check out the full report here – and also check out the best practices whitepapers about virtualizing Exchange 2013 and Exchange 2010.)
When System Center 2012 SP1 is used to manage Exchange in a virtualized environment, users can more effectively monitor, maintain, and protect Exchange data. This is accomplished by enabling IT administrators to deploy Windows Server VMs to host Exchange workloads and to create role-based service access for administrators, along with the standard System Center support for IaaS, performance insight, data protection, and reduced downtime.
One last note: Testing has just been completed on a single Hyper-V host, running 5 SharePoint virtual machines, scaled to over 1 million users. This test showcases, in great detail, Hyper-V’s ability to drive the highest levels of performance for enterprise workloads. The report on this test is currently being written.
To see what is possible with SharePoint workloads, check out the solution I introduced during my keynote at MMS – the Service Template for SharePoint 2013. This service template has generated a lot of interest about what can be done with an on-premise private cloud – and you can download the service template here. What makes this offering interesting to me is not what it is doing, but how it relates directly back to your own strategies for workload deployments in your Microsoft private cloud. This solution is also usable by hosters who want a process that is repeatable and streamlined for customer use.
With these workloads running at optimal speeds, with optimal compute power, and with optimized performance – we are offering the industry’s most sophisticated tools for managing apps, on-prem and cloud-based infrastructure, as well as PCs and devices. With Windows Server, Hyper-V, Windows Azure, and System Center, IT departments with rapidly growing infrastructures and workloads can execute some very important operations: accelerate deployments, draw complex insight from their operations, and centrally protect, automate, and efficiently manage key applications and workloads running on the Microsoft Cloud OS platform.
By running Microsoft workloads and platforms, IT administrators are benefiting from the technology and tools that have been meticulously fine-tuned to work together.
In this episode, I take a look at the topic of high availability, back up, and disaster recovery.
These are incredibly important topics for businesses that depend on consistent access to corporate data, and whose customer base insists on avoiding any downtime. Regular readers of the blog may remember a similar topic a couple weeks ago, entitled "Continuously Improving Continuous Availability."
As always, if you have any questions or suggestions for future episodes, let me know!
As many of you already know, at this year’s TechEd we’re going to take the wraps off a range of new products and services, and I am very excited to be delivering the keynote address at both TechEd North America and TechEd Europe. I cannot wait to talk about – and demo! – some of the remarkable new capabilities and functionalities we are going to deliver to the IT industry.
TechEd has historically been one of Microsoft’s most important conferences for IT professionals and enterprise developers – and this year is going to be a genuinely must-see event.
If you haven’t done it yet, take the time to register now for New Orleans or Madrid, or mark your calendars to watch these events live or on-demand. For North America, you can watch my remarks on June 3 live here, and on-demand here. In Europe, you can watch the keynote live on June 25 here, and on-demand on the following Tuesday and Thursday here.
I can’t wait to come back to these two great cities – in particular, my wife and I visited Madrid for a week back in 1998, and it remains one of our all-time favorite vacations. In fact, after living in Puerto Rico for two years in the late ‘80s, I still speak fluent Spanish. Me encanta España!
Also: To better fit the needs of each individual attendee, this year there are nine learning tracks to choose from, covering topics like Data Platform & Business Intelligence, Modern Datacenter, Windows Azure, and Windows Client & Access Management. I’ve included below some information provided by the TechEd events team that has been put together to help you make the most of your time in New Orleans or Madrid.
TechEd sold out last year so make sure you register now for New Orleans (June 3-6) or Madrid (June 25-28) to ensure you don’t miss out on everything this event has to offer. See you there!
In this episode I talk about the Hybrid Cloud and CRN's recent piece entitled "Microsoft Leaps Ahead of VMware in Hybrid Cloud Management." You can read my post on this same topic from late last week here.
As always, if you have questions or topics you’d like to hear addressed on an upcoming episode, let me know!