Blog - Title

In the Cloud

  • Recap: Gartner Symposium 2014

I spent a few days in Orlando this week at the annual Gartner Symposium.  Every year, Symposium is a cool event and a great opportunity to meet with some of the best and brightest in the tech industry.

  • Big Announcements on the Way!

    Quick update to the Enterprise Mobility community: 

    Over the next couple weeks I’m going to be blogging about some major announcements that will be game-changing in the Enterprise Mobility Management industry.  These announcements are going to raise the expectations enterprises have about what EMM should do for them.

    I’ll cover each of the announcements right here and discuss in detail what each one means, its benefits, and how to get started.

The first announcement is just a few days away…

  • The Endpoint Zone: Episode 2!

    In Episode 2 of “The Tech Industry’s Official Enterprise Mobility Talk Show” we return to the Channel 9 studios to review the last month in EMM.

    This month The Endpoint Zone examines:

    • How Windows 10 impacts Enterprises and Enterprise Mobility.
    • EMM rankings and Microsoft’s plans for these rankings.
    • Industry efforts to “integrate” with SCCM.
    • My recent trip to speak at the 2014 Gartner Symposium.
    • In an unprecedented (not really) reversal, we interview CITEworld writer Ryan Faas.

    If you have any questions or feedback for next month, let me know!

  • Podcast Thursday: How Machine Learning Makes You More Secure

    In this episode of the ITC Podcast, I look at the topic of cloud-based Machine Learning and the big things it can do to set your mobile strategy apart right now.  I’m joined again by Alex Simons, and this topic really builds upon our discussion last week about cloud-based data protection with Azure RMS.

    To read more about this topic, check out my post New Levels of Security via Machine Learning, and also visit the Machine Learning Blog.

  • The Endpoint Zone, Episode 3: iPad, iPhone, and Android on Office!

    In this month’s installment of the tech industry’s official enterprise mobility talk show©, we review the HUGE announcements from this last month.  HUGE!

    This month The Endpoint Zone examines:

    • GA dates for Intune updates and Office mobile apps + the release date for the very popular Office for iPad apps.
    • Overview of the MDM integration of Intune directly into Office 365.
    • Overview of Office for Android  (read my overview here).
    • Discussion with all-star IT guy Patrick Wirtz from The Walsh Group.


  • Recap: IT/Dev Connections Keynote

    I got to spend all Monday at the IT/Dev Connections 2014 conference in Las Vegas, and I had a blast.  It’s a great community event, and I enjoyed seeing a lot of good friends who are doing great work all over the world.

    You can keep track of the conversations going on at the event by following the #devconnections hashtag, and, in the next few days, my keynote will be put up on their site to watch on demand.

  • Announcing the GA of Disaster Recovery to Azure using Azure Site Recovery

Back in June I announced the Preview of the Disaster Recovery to Azure functionality with Azure Site Recovery (ASR), and its promise to democratize DR is now beginning to have a tangible impact on how our customers perceive cloud-based DR. With ASR, mission-critical workloads – which are often left unprotected due to the high cost of incumbent business continuity solutions – now gain the resilience, efficiency, and availability of Microsoft Azure.

    We spent a lot of time thinking and planning precisely what should go into this GA, and here is a bit of insight into our thought process:

As the productivity and platform company for the mobile-first, cloud-first world, Microsoft is committed to solving the business continuity challenges that CIOs consistently rank as top priorities. Our acquisition of InMage and the integration of InMage Scout with Azure Site Recovery dramatically accelerated our strategy to provide hybrid cloud business continuity solutions for any IT environment – whether it’s Windows or Linux, physical or virtualized, or on Hyper-V, VMware or others. Microsoft Azure is now the ideal destination for disaster recovery for enterprise servers around the world.

    Here’s what ASR with support for DR to Azure and InMage integration looks like in action:

[Image: DR to Azure with ASR]

    With this GA announcement there are also a few significant additions to the already expansive list of DR to Azure features:

    • Integration with Azure Automation
      Orchestrated recovery is now even more robust and further simplified with the ability to execute Azure Automation runbooks from within ASR Recovery Plans.
    • Initial Replication Progress
Tracking progress of the initial replication of virtual machine data to a customer-owned and managed geo-redundant Azure Storage account is now available within ASR. This new feature is also available when configuring DR between on-premises private clouds.
    • Simplified Setup and Registration
We have removed the complexity of generating certificates and integrity keys needed to register your on-premises System Center Virtual Machine Manager server with your Site Recovery vault, making getting started with ASR easier than ever before.

    In addition to enabling DR to Azure, ASR is also an ideal choice for effectively migrating virtual machines to Azure if you are already virtualized on Windows Server 2012 R2. For our customers who want to migrate their Physical or Pre-Windows Server 2012 R2 virtualized workloads running on any cloud to Azure, we also recently announced the Preview of Migration Accelerator.

With ASR’s ability to protect, consistently replicate, and fail over virtual machines directly to Microsoft Azure, our Enterprise and SME customers are already saving big on the CAPEX and OPEX costs that would otherwise be dumped into building, managing, and maintaining expensive secondary datacenters. The net impact on the bottom line of these early-adopting enterprises is clear: In the three months that the DR-to-Azure functionality has been in Preview, we’ve seen a 400% increase in VMs that are protected using ASR, and new customer subscriptions are up 100%.

    I also want to reiterate a few key points from my previous blog that I think are really exciting:

    • While it is very technically sophisticated, ASR is really simple to use and it’s easy to configure and automate the replication of virtual machines based on policies that you control.
    • Because it eliminates the expense of building and maintaining a secondary datacenter for DR, ASR is very cost-effective.
    • ASR is intuitive to use and the self-service model builds on top of existing products, e.g. System Center, Windows Server, and Azure.
    • The benefit of cloud-based extensibility can’t be overstated – this architecture allows for faster development and easy access to new features.
    • ASR offers a consistent user experience across the work you’re doing in any private cloud, via a service provider, or in a public cloud. No matter what, the UX and the functionality are the same.

    Beginning in October, Azure Backup and Site Recovery to Azure will also be available in a convenient (and economical) promotional offer via the Microsoft Enterprise Agreement.  Each unit of the Azure Backup & Site Recovery annual subscription offer covers protection of a single instance to Azure with Site Recovery, and protection of up to 100GB of data with Azure Backup.  You can contact your Microsoft Reseller or Microsoft representative for more information.

    For more information on ASR, check out the recording of the ASR session at TechEd 2014 where we discussed the preview, and also visit the Azure Site Recovery forum on MSDN where you can find additional information and engage with other ASR users.

    Once you’re ready to see what ASR can do for you, you can check out pricing information, sign up for a free trial, or learn more about the product specifications.

  • ITC Podcast, Episode 13: Enterprise Mobility Components

    In episode 13 of the ITC Podcast, I talk about the “Enterprise Mobility Components” section of my ongoing blog series Success with Enterprise Mobility.

    We talk about identity and SaaS management, app/data protection, “Managed Everything” model, and more.  There’s a lot of focus on how these pieces work together, how they can work for you, and the technology behind these solutions.

    These are big topics that are having a really big, really serious impact on organizations all over the world.

    It’s a fun discussion and, if you have questions or comments about this episode, don’t hesitate to get in touch!

  • Taking the Offensive against Malware

    I have led the anti-malware work here at Microsoft for a number of years, and this has been one of the most interesting responsibilities I have ever had.

    The best way to describe this work is hand-to-hand combat with the bad guys. The work never ends and it changes every few minutes. The malware we defend against today is very, very different than what it was in the past.

    I vividly remember when Slammer hit. As fate would have it, I was on Wall Street on that day visiting several large banks. Mid-way through a meeting, several people suddenly excused themselves and walked out. A couple hours later, I learned what was happening and how quickly it had spread through that bank.

That early kind of malware was about individuals wanting to make a name for themselves or just trying to cause trouble. Today’s malware is a business – and, all too often, malware is custom built by organized crime or governments trying to steal money and secrets.

    One of the things I find most rewarding about working on anti-malware is the opportunity I have to work with great minds across the tech industry. It’s no exaggeration to say that the anti-malware community is one of those places where it is, literally, the good guys vs. the bad guys.

    All of the good guys (the anti-malware vendors) are in constant communication with each other and we all share what we learn in real time. We all have the same common goal of protecting individuals and businesses – and we work very collaboratively to do exactly that.

    If you’re interested in an inside look at how this joint effort works, there are a couple of novels that give very good portrayals of this battle and how it operates. I recommend Worm by Mark Bowden, and any of the books by Mark Russinovich (one of my favorite people here at Microsoft!).

I am very proud of the work we have done over the past couple of years with Microsoft Security Essentials and Windows Defender, and I believe that we have the best anti-malware strategy and the right solution for our customers. Of course, the obvious response to a statement like that is, “Brad, what’s the definition of ‘best anti-malware strategy/solution’ that you’re using to benchmark that statement against?” To begin with, the philosophy guiding our anti-malware work has been based on three axes which we use to grade ourselves on the effectiveness of our work:

    1. Do we protect the device and user?
    2. How invasive are we on the device?
    3. Do we have any false positives?

    Do We Protect the Device & User?

This is the measurement of how effective we are at blocking malware attacks, as well as how quickly we are able to remove a threat when something does get through.

Every week, we have an in-depth review of how many devices around the world are being actively protected, and we look at what percentage of them were secure and protected during the entire week. This is an area where we do very, very well.

Here are a couple figures that may be super surprising: Through our telemetry, we receive more than 1 million pieces of malware every single day. This telemetry is ingested via an Azure service, and then, through machine learning, we are able to identify and categorize all of it within minutes.

    Most of the malware that comes in is an evolution of an existing family, and we can automatically recognize and update our signatures as necessary. A lot of this new malware is created by machines and these machines are constantly making small changes – thus, we’re always aiming at a moving target that we could never keep up with on our own. In other words, we fight machines with machines.

    Our machine learning is able to take the 1 million+ pieces of malware that come in every day, categorize and verify that we are already blocking them, and then, when something new is found, it is quickly flagged to humans who dig into the details. The system is really impressive, really fast, and very effective.
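The triage loop described above – auto-classifying samples that look like an evolution of a known family and escalating anything novel to a human – can be sketched roughly as follows. The family names, behavior strings, similarity measure, and threshold are all invented for illustration; the real pipeline is far more sophisticated than this:

```python
# Hypothetical sketch of the malware triage loop: incoming samples are
# matched against known family signatures; close matches are auto-classified,
# and anything novel is flagged for a human analyst.
from difflib import SequenceMatcher

# Invented behavior "signatures" for two well-known family names.
KNOWN_FAMILIES = {
    "Conficker": "open_smb spread_usb disable_update phone_home",
    "Zbot":      "hook_browser steal_creds phone_home encrypt_config",
}

def triage(sample_behaviors: str, threshold: float = 0.75):
    """Return (verdict, family): auto-classify near-matches, flag the rest."""
    best_family, best_score = None, 0.0
    for family, signature in KNOWN_FAMILIES.items():
        score = SequenceMatcher(None, sample_behaviors, signature).ratio()
        if score > best_score:
            best_family, best_score = family, score
    if best_score >= threshold:
        return ("auto-classified", best_family)   # evolution of a known family
    return ("flag-for-analyst", None)             # novel: route to a human
```

For example, `triage("open_smb spread_usb disable_update phone_home beacon_dns")` auto-classifies as a Conficker variant, while a completely unfamiliar behavior profile is flagged for human review.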

To ensure we stay on top of every piece of malware, we have teams of Microsoft engineers stationed around the globe, and, thanks to the intelligence gathered by our Machine Learning, we are constantly updating our services. For example, we deploy new signatures to our customers three times a day to protect Windows devices and their users.

    One point of view that we have been communicating across the industry is about the importance of relevance in prioritizing our efforts. By relevance I mean that there are certain families of malware that dominate the ongoing attacks and infections – and eradicating the most prevalent and destructive malware families should be where the anti-malware community focuses its collective efforts. I’ve been urging the organizations that test and rate the industry’s anti-malware solutions to do this, and I was excited recently when one of these testing organizations (AV-Comparatives, based out of Austria) published their latest report with “Relevancy” listed as one of the main criteria. You can read their report here.

Relevance is also critical because each month, when the security updates are released by Microsoft, we also update a tool called the Malicious Software Removal Tool (MSRT). One of the primary functions of MSRT is to remove the really ugly and tough malware, i.e. things like rootkits. MSRT is executed on nearly 1 billion PCs around the world every month, and we get telemetry from it that helps us understand what malware is landing on PCs. This helps us prioritize our efforts to eradicate the most dangerous and problematic malware.

    There are over 7,500 malware families, and each month roughly 250 families make up 95% of all malware encounters. Roughly 500 families make up 99% of malware encounters. Like most things in life, a small, concentrated group of malware is responsible for the majority of the attacks. It is also interesting to note that a small set of PCs are constantly infected – the majority of which are running without any anti-malware at all.
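To make those concentration numbers concrete, here is a small sketch of how you might count how many of the most prevalent families cover a target share of encounters. The distribution below is synthetic, not real telemetry:

```python
# Toy illustration of long-tail concentration: given per-family encounter
# counts, find how many of the most prevalent families cover a target share
# of all encounters. The data is synthetic, not real telemetry.
def families_for_coverage(encounters_by_family, target_share):
    counts = sorted(encounters_by_family.values(), reverse=True)
    total = sum(counts)
    covered = 0
    for n, c in enumerate(counts, start=1):
        covered += c
        if covered / total >= target_share:
            return n
    return len(counts)

# A steeply concentrated synthetic distribution over 7,500 "families"
synthetic = {f"family{i}": int(1_000_000 / (i + 1) ** 2) for i in range(7500)}
print(families_for_coverage(synthetic, 0.95))
```

With real encounter telemetry in place of the synthetic dictionary, this is the kind of calculation behind the "roughly 250 families make up 95%" observation.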

Relevance is a matter of making sure you understand what malware is most common or prevalent and ensuring you are protected against that group. To improve the testing and protection of all the organizations we work with, we have offered all of the anti-malware testing organizations a view of the relevant malware so that they can build their tests in a way that reflects what is actually happening in the world. In stark contrast to this, there have been cases where other testing organizations have published results that look pretty bad until, upon some simple investigation, we see that their reports flag malware that does not appear anywhere within the telemetry we get back from those 1 billion MSRT PCs.

    How Invasive are we on the Device?

    I’m sure you’ve experienced some sluggish PC performance in the past – only to investigate the issue and find that it’s your anti-malware doing a scan or intercepting calls within the browser. The intention here is good (protecting the user and the device is job #1, after all), but it’s obviously coming at a significant price.

I can’t tell you the number of times a family member, friend, or neighbor has asked for help with their PC or device (yes, my night job is 1-800-CALL-BRAD Tech Support :)), and, in so many of these cases, I have resolved the issue by “upgrading” them to Microsoft Security Essentials or Windows Defender.

    As we built our anti-malware here at Microsoft, we had, as a first-level requirement, that our solution be non-invasive. The user should never know we are there. As a team, and as an organization, we held ourselves accountable to this design requirement and have built what I believe is the least invasive protection solution that still delivers all the required protection (which, again, is job #1).

    To see the details of this work, I recommend checking out this site which overviews how we measure the most important scenarios in a customer's endpoint experience – and ensures we consistently deliver the best possible protection.

    To see a detailed overview of the work being done by the Microsoft Malware Protection Center, check out this in-depth whitepaper.

    Do we have any False Positives?

A false positive in anti-malware is when we mistakenly identify a good piece of software as malware and then start removing it from PCs around the world. Rather than lump this in with the previous section (not impacting the device/user), we expressly called it out as a first-class design principle.

We have gone to great lengths to minimize false positives, and you would be amazed to see what our back-end services look like (built in Azure, of course): we take every signature and run it through automation – including scans against millions of files that we know are “good.” False positives break customer trust/confidence, and they cause significant expense and randomization for our enterprise customers. We believe that we have the lowest rate of false positives in the industry by a very large margin.
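The goodware-vetting gate described above can be sketched as follows. The "scan engine" here is a stand-in substring match and the corpus contents are invented; it only illustrates the rule of rejecting any signature that hits a known-good file:

```python
# Hedged sketch of a pre-publication gate: before a new signature ships,
# run it against a corpus of files known to be clean, and block publication
# if it matches anything. The "engine" and corpus are invented stand-ins.
def signature_matches(signature: bytes, file_contents: bytes) -> bool:
    # Stand-in for a real scan engine: a byte-pattern substring match.
    return signature in file_contents

def vet_signature(signature: bytes, goodware_corpus) -> bool:
    """Return True only if the signature hits zero known-good files."""
    for contents in goodware_corpus:
        if signature_matches(signature, contents):
            return False  # would be a false positive: block publication
    return True

corpus = [b"calc.exe contents", b"notepad.exe contents"]
assert vet_signature(b"evil_payload_marker", corpus)   # safe to publish
assert not vet_signature(b"notepad", corpus)           # hits goodware: reject
```

The real automation runs at vastly larger scale, but the design principle is the same: a signature never ships until it has been exercised against known-good files.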

One quick story (trying to keep it real) before moving on: A couple years ago, I was driving into work on a Friday morning when I got a call from the leader of the anti-malware team. His first words were, “Are you sitting down?” That’s never a good way to start a conversation. That morning we had published a signature with a false positive – and that small error promptly began removing an old version of Chrome from PCs all over the world. Because we are constantly receiving and analyzing so much high quality telemetry, we knew about the false positive within minutes and were able to stop/recall the signature immediately.

When all was said and done, the signature had removed an old version of Chrome on about 50,000 PCs – a very small number. That morning ended up being a big learning moment for our organization; it emphasized just how important it is to execute perfectly on every detail when 1,000,000,000 PCs are counting on the software we create.

    The Whole is More Than the Sum of the Parts

Every single day the anti-malware team makes decisions about how to constantly improve and strengthen our anti-malware service and the software we deliver. This is a constant and continual battle, and we put a premium on remaining agile and proactive. For me, the greatest validation that we are doing the right thing here is the number of family/friends/neighbors who tell me how much better their PCs are running after I moved/upgraded them to MSE or Defender – something I’ve done on literally hundreds of PCs over the years.

    Working Together

The work that we have done to build System Center Endpoint Protection (SCEP) on top of System Center Configuration Manager (SCCM) is something I recommend learning more about. Every IT organization I know of has pressures to do more with less, and to do all of it in a shorter amount of time. One of the most effective ways to decrease costs and accelerate value is to minimize the number of infrastructures that you have to deploy, secure, manage, and update. If you are using SCCM and SCEP, there is only one infrastructure that you have to worry about. Because this is all running on a single infrastructure, the value you are able to extract is significantly easier to obtain, faster, and less expensive.

    For example: You may want to generate a report that shows the devices that were infected over the course of the past month, as well as the user that was associated with that device, and then a view of the compliance of that device’s configuration relative to your organization’s corporate standards. Sounds like a big undertaking, right? If you are using SCCM and SCEP – all of this is in a single database on a single set of objects.

    On the other hand, if you are using SCCM and something other than SCEP for your anti-malware, you’re going to need to do a huge amount of work to pull this information together. The process will go like this: First, you’ll have to create a data warehouse and create the jobs to constantly sync the SCCM and other anti-malware data into the DW. Next, you’ll need to do the work to correlate devices, e.g. help the DW understand that device foo in SCCM is the same device bah in the anti-malware database. After you’re done with this, you can start creating some custom reports. Chances are you will have given up long before getting to this point. On the off chance you do get this set up, have fun holding it together.

    The good news is that we have already done all this work for you, and all of this is stored in a single database, and it all operates on a single set of objects.
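To illustrate the single-database point, here is a toy sketch using an invented schema (not SCCM's actual schema): with devices, infections, and compliance state in one store, the monthly report described above is a single join, with no data warehouse or correlation jobs in sight.

```python
# Toy model of "one database, one set of objects": infected devices last
# month, joined to their owning user and compliance state, in one query.
# The schema and data are invented for illustration only.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE devices    (id INTEGER PRIMARY KEY, name TEXT, user TEXT, compliant INTEGER);
CREATE TABLE infections (device_id INTEGER, malware TEXT, detected_on TEXT);
""")
db.executemany("INSERT INTO devices VALUES (?,?,?,?)",
               [(1, "LAPTOP-01", "alice", 1), (2, "DESKTOP-07", "bob", 0)])
db.executemany("INSERT INTO infections VALUES (?,?,?)",
               [(2, "Zbot", "2014-10-03")])

rows = db.execute("""
    SELECT d.name, d.user, d.compliant, i.malware
    FROM infections i JOIN devices d ON d.id = i.device_id
    WHERE i.detected_on >= '2014-10-01'
""").fetchall()
print(rows)  # one row per infection - no warehouse, no sync jobs
```

With separate SCCM and third-party anti-malware databases, that same join requires the warehouse, sync jobs, and device-correlation work described above.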

    One last note: Many people reading this post purchased SCCM through an Enterprise Agreement with Microsoft – this means you also own SCEP (since it is part of the Enterprise Agreement) and don’t even have to purchase anything. What’s great is that SCEP/MSE/Defender all use the exact same agent on the protected device and the same clouds service. This is an example of how much architecture really matters when it comes to getting the best solution possible.

  • The GA of Azure RemoteApp!

    If you’re like most organizations, you probably have a long inventory of Windows apps you’ve written over many years that your employees use every day.  What is the easiest way for you to deliver these Windows apps to your users for use on all their devices?

    In the past, organizations have built out RDS or Citrix server farms to deliver Windows applications – and many will continue to do this.  However, one thing we have heard many times is that organizations want application remoting solutions – without the up-front CAPEX spend and without having to build for the “peak” usage capacity. 

    This is exactly what we have been working on delivering:  The capabilities of RDS – but as an Azure service.

To do this we have completely re-written RDS to run as a pure cloud service on the global platform of Microsoft Azure.  Now you can take advantage of the global availability of Azure to deliver all your Windows apps to all your users’ devices – Windows, iOS, Mac, and Android.  With this setup you only pay for the time/capacity that your users actually use, and you don’t have the up-front CAPEX expenditures (as explained in this video).

    Today we are announcing that Azure RemoteApp is exiting preview status and is now Generally Available – fully supported and available globally.  If you want to learn more about Azure RemoteApp I would encourage you to check out the deep dive session on this from TechEd Europe.

New features in Azure RemoteApp since the initial preview include:

    • Global availability
    • Full-featured 30-day free trial
    • Support for Office 365 ProPlus
    • New performance and scale options
    • Usage metrics and pricing
    • Supported Clients include Windows Phone 8.1

    To learn about each of these in great detail, check out this post from the Remote Desktop Services team.

    Let me provide a little additional context on what we have delivered and how it is different from other hosted remoting solutions on the market:

As we have worked with the thousands of organizations that took advantage of the preview, we learned that the primary use case for Azure RemoteApp is delivering Windows apps to the entire collection of devices users are now relying upon to accomplish work.  Notice that the focus is on delivering apps and not delivering the desktop.  VMware and AWS recently released solutions that focus on delivering the desktop (at a very, very high price).  We learned during the preview that the use of cloud-hosted solutions is focused on the apps and is going to closely mirror the use of remoting solutions on-premises (where more than 80% of the use is delivering apps).  Think about it for a minute:  On an iPad or Android device, what users want to see is the apps they want to use. Delivering a Windows start screen and the entire desktop is not going to be a primary use case.

    And this decision has a pretty significant impact on the pricing. 

Azure RemoteApp has two different packaging options: one at $10/user/month and another at $15/user/month.  After the discounts you typically receive from Microsoft, many of you will be paying ~$7/user/month for this amazing solution.

Think about that: You can upload your Windows apps and make them globally available to all your users on all of their devices at a very low price.  In comparison, VMware and AWS have pricing options that start at $25 and go well above $50/user/month.

This kind of functionality is not only backed by Azure’s SLA (and supported by Microsoft Support, offering the full scalability and security of the Azure cloud), but it will also be a focus of continuous improvement as we listen to your feedback and continue innovating based on your needs.  These are really important points to call out.  We back all our service offerings with a financially backed SLA.  If we do not meet our SLA we do not expect you to pay us.  VMware and AWS are not willing to offer this level of guaranteed SLA.  This fact really raises the question of whether they understand the needs of the Enterprise and the level of service you require.

    And because we have implemented this as a modern cloud service we will be continually updating the capabilities.

    The post on the Remote Desktop Services blog noted above has a perspective on the value of Azure RemoteApp that I think is pretty insightful for the IT Pros that will be using it, and the CIO’s who need to see the concrete results:

    Using Azure RemoteApp, you can deploy your critical business applications in the cloud; manage them through Azure’s convenient interface; and provide your users an intuitive, high-fidelity, WAN-ready user experience. We are ready to host your Windows applications in production: Deploy them in 13 Azure regions globally and turn them into a finished, turn-key, cloud service accessible to users on any device, anywhere in the world.

    Getting started with Azure RemoteApp is just about as easy as it gets:  To begin, sign in to your Azure management portal and in a matter of minutes (literally!) you can deploy your first RemoteApp collection to users.  For those of you without an Azure subscription, click here for a free one.

To learn more about Azure RemoteApp before getting started, take a look at our 5-minute hands-on demo. To see how the Internet Explorer team has used Azure RemoteApp, see the RemoteIE app.

One final thing:  I also want to announce that, as of this morning, we have now crossed the 10M download milestone for our Mac, iOS and Android RDS Clients/Apps.  This is an amazing milestone!  Thank you for all the incredibly positive feedback and ratings in the various app stores.  This massive amount of usage has enabled us to continually improve these clients, and you can use them in your enterprise confident in their quality and reliability.

    This is great technology, built by great people, for great businesses.


You can hear even more about the topics in this post in today’s new episode of The Endpoint Zone:

[Video: The Endpoint Zone]

  • Data Visualization for IT: System Center Ops Manager Dashboards

    One of the most important things an IT department can do for itself is demonstrate its value to business leaders clearly and regularly. The easiest way to do this is with a dynamic, real-time dashboard view of the IT operation.

    The bad news is that accurate, functional, and responsive dashboards have historically been difficult to produce, as well as lacking the features necessary to show the range of services, solutions, and operations at work behind the scenes. The good news is that System Center Operations Manager 2012 Dashboards is a huge leap forward – and I think it has gone overlooked by a lot of System Center 2012 SP1 users. This is also one of the many reasons to upgrade to SP1 as soon as you can.

    Ops Manager is a technical product and, historically, has been too complex for anyone outside of the IT team to use consistently. In our recent release updates, however, we have created some impressive dashboard functionality which addresses the primary needs for this kind of tool:

    • IT managers and business leaders both need fast access to data about their applications and infrastructure in order to make quick decisions and troubleshoot.
    • To see this data at a glance, users need a one-stop, modular display that can have a customizable level of depth and insight into that data.
    • This modular, customizable view of the data should be able to show any set of metrics, KPI’s, environments, or apps – all without needing to navigate between views.
    • This dashboard needs to be something that can be built and deployed in minutes, and updated or adjusted in seconds.
• This needs to be easy enough to be used by leaders outside of the IT team, and it needs to be viewable in multiple formats, e.g. via an operations console, a web console, or the common and familiar SharePoint web part.

    If this all seems too good to be true, or if you want to see how this modular, single-pane dashboard really works, I recommend reading more about how to build an Ops Manager dashboard and how to add widgets to it.

    This tool has gotten a lot of traction under the radar with companies around the world, and its users are benefitting from its functionality. One of these users is Stephen Hall from T. Rowe Price:

    “The new dashboards in Operations Manager 2012 SP1 have given some of my older custom management packs new life by dramatically improving the performance of my views and providing better web integration.

    I am very excited to see more attractive custom graphical interfaces being developed, like those in the new SQL Server management pack, that are utilizing the new dashboard capabilities.

    The biggest advantage I’ve recognized in using the new dashboard capabilities is less reliance on the need to build and schedule reports to access data from the data warehouse.”

    Stephen Hall
    Systems Management Technology Architect
    T. Rowe Price


    Stephen’s experience is echoed by passionate users who are using this tool to graphically demonstrate the operations and performance of their IT infrastructure. If you haven’t used System Center Operations Manager 2012 Dashboards before, there’s no time like the present!

Another important facet of this dashboard tool is that it is distinctly different from an automated report. The primary difference is that a report processes data for a specific period of time in the past, whereas a dashboard projects and visualizes current and future data. Dashboards also present data visually (which can be invaluable when IT admins need to manage up), and they allow data to be seen in the context of other work streams operating concurrently.

    To learn how to get started and use the Ops Manager Dashboards, check out this video from one of our Senior PMs, Satya Vel.

    Dashboard Images 

  • Table of Contents: What’s New in 2012 R2

    Introduction:  Beginning and Ending with Customer-specific Scenarios


  • Europe Trip Recap

    I just returned from Europe where I spent a couple days in Barcelona presenting at the Gartner Symposium, and then a few days in London visiting customers using Microsoft solutions for managing their mobile devices.

    As an engineer, there is nothing quite as rewarding as sitting down with the people who use the solutions you build and seeing the impact of your work. It is absolutely one of my favorite things to do. In total, these customers were managing tens of thousands of mobile devices through SCCM and Intune.

    London included a really unique set of customer visits. Together with most of my leadership team, I met with more than 10 customers and had multi-hour, in-depth conversations about how they are using our solutions, the challenges they are tackling, and areas where we can improve.

    These meetings included organizations from just about every industry – retail, manufacturing, government, transportation, banking, real estate, and more. The goal of these meetings was to dive into some very specific topics, e.g. places where we felt we could help organizations “get to value” faster. I really appreciate the time each of these organizations was willing to spend with us – we learned a ton.

    When I speak with media and analysts, I’m often asked about the most common ways customers are using Microsoft products and services. Here are a couple of commonalities I saw across all of these customer meetings:

    • Secure e-mail is the most common scenario.
      Most of these customers rolled out Intune to enable secure e-mail on their users’ mobile devices. This was common on both BYO and corporate-procured devices. Every one of these customers is looking forward to the integrated Office and Enterprise Mobility Suite (EMS) capabilities, which will start rolling out over the next couple of weeks. The integrated solution we are delivering across identity (AAD), productivity (Office), and management (Intune) will be the best experience for this. Period.
    • Single pane of glass for PCs and devices.
      Every single one of these customers has a vision of a single console for managing PCs and mobile devices. Every single one of these customers was also already using SCCM (or was in the process of migrating to SCCM) to take advantage of the hybrid SCCM/Intune capabilities. One of the most impressive meetings was where the resident SCCM guru talked about getting Intune up and running in about an hour and then integrating it with SCCM. This has been foundational to our strategy since day one: Empowering the SCCM administrators to expand their influence and impact. One piece of feedback we received was to provide more documentation on when a customer should go hybrid vs. when they should think about a cloud-only solution.
    • Conditional Access is a killer, killer feature.
      The concept behind conditional access is setting policy that grants access based on a device being compliant with a set of policies. It is an effective carrot and stick that encourages end users to keep their devices compliant. A great example: most organizations would like to set a policy that e-mail is not allowed to flow to a device if that device has been jailbroken. What was perhaps most impressive in a number of these conversations was the ingenuity of the IT professionals who have effectively built their own conditional access with Intune. At the end of each meeting we provided a deep dive into the SCCM/Intune roadmap and shared some insight into the pre-production Intune console with the conditional access capabilities – as well as how it integrates with Exchange and SharePoint for conditional access to e-mail and files. The feedback on what is coming (soon!) in Intune for conditional access was applause and big smiles. Watch for it.
    • Focus on Enrollment and make sure it is as simple as possible.
      A number of the customers walked us through their enrollment processes for bringing devices under management. My biggest piece of advice for everyone is to make this as simple as possible. This advice applies just as much (if not more) to everyone here at Microsoft building these experiences. The place we all want to get to is an interface where a user simply inputs their e-mail address and password, and then the device is brought under management, the appropriate policies are set, and e-mail/files start to flow. This is the bar to which we all need to aspire. This is yet another place where the kind of integration we at Microsoft are doing across identity (AAD), productivity (Office), and management (Intune) sets us apart from anyone else in the industry. The upcoming “Architecture Matters” blog series will dive into this in great detail.
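    To make the conditional access concept above concrete, here is a minimal sketch of the gating logic. Everything in it is illustrative – these are not Intune APIs or real policy names – but the shape is the same: e-mail flows only while a device passes every compliance check.

```python
# Illustrative sketch of conditional access: access is granted only while
# the device satisfies every compliance policy. All names here are
# hypothetical -- this is not the Intune API.

from dataclasses import dataclass

@dataclass
class Device:
    owner: str
    jailbroken: bool = False
    pin_enabled: bool = True
    encrypted: bool = True

# Each policy is a predicate the device must satisfy.
COMPLIANCE_POLICIES = {
    "not_jailbroken": lambda d: not d.jailbroken,
    "pin_required": lambda d: d.pin_enabled,
    "storage_encrypted": lambda d: d.encrypted,
}

def failed_policies(device: Device) -> list[str]:
    """Return the names of every policy the device violates."""
    return [name for name, check in COMPLIANCE_POLICIES.items() if not check(device)]

def allow_mail_flow(device: Device) -> bool:
    """E-mail flows only to fully compliant devices (the 'carrot and stick')."""
    return not failed_policies(device)
```

    Swapping the policy set or adding new checks (OS version, passcode length, and so on) changes nothing about the gate itself – compliance in, access out.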

    I love these on-site visits. Looking forward to the next set!

  • Use IE on Your Mac, iOS, and Android Devices!

    On Monday we published a preview of the next version of IE in Azure RemoteApp – a service we’ve been building that delivers Remote Desktop Services as a 100% cloud service. This means that Azure RemoteApp has essentially infinite scale and runs on the globally available Azure cloud platform.

    To try it for yourself visit:  https://remote.modern.ie/

    In the 5 short days since its launch, the response has been awesome – tens of thousands of unique users have come to try the upcoming version of IE. I encourage you to check it out and keep sharing the great feedback.

    Earlier today I was reviewing the telemetry and feedback with the Azure RemoteApp team, and one thing really jumped out at me: By far the bulk of the endpoints being used to access this service are Macs. Pairing this fact with some of the feedback we’ve received thus far indicates what one of the most common uses of Azure RemoteApp is going to be.

    Just about every enterprise organization has a significant inventory of internal-facing Windows applications built over the past 10+ years, and a significant portion of these are web apps. What we’ve learned over the past few days is that many of these apps are not compatible with Safari. With IE running on Azure RemoteApp, Mac and iOS users can now access a broader set of these web apps – apps known to be compatible with IE but not with Safari.

    I am curious to learn how interested you would be in this scenario. If using RemoteApp to host IE is an interesting scenario to you, tell us more about how you’d use it and include the hashtag #remoteIE.

    We are really close to announcing the GA and packaging/pricing of Azure RemoteApp – this is going to be exciting!

  • TechNet Radio: Getting Excited about the EMM Webcasts

    Yesterday I spent some time talking with Kevin Remde from TechNet Radio.  You can check out the quick interview below.

    We talked about my in-depth Enterprise Mobility blog series and the new Enterprise Mobility webcast series that kicks off this Tuesday (December 9).

    You can register to see the webcast and take part in the Live Q&A here:  aka.ms/EMMwebcasts

  • Keep Calm and Migrate from VMware Using MAT

    The new Hyper-V features introduced in Windows Server 2012 were game-changing for the IT industry, and the impact has been so positive that just about every customer I speak with is conducting their own tests of Windows Server 2012 and Hyper-V as their hypervisor.

    This is so common that I’ve gotten a lot of questions about how to streamline not just the testing process, but the actual migration process from VMware to Hyper-V. I’ll outline a great option for how to do that in this post. Ultimately, it’s about converting the VMs from the VMware VMDK format to the Hyper-V VHD format.

    There are great tools from a number of partners to do this migration (companies like Vision Solutions, Embotics, Racemi, 5Nine, Quest, and NetApp), and last year, Microsoft also released the Microsoft Virtual Machine Converter (MVMC) – a free tool that provides a simple and easy conversion experience from VMware to Hyper-V.

    MVMC makes converting a few virtual machines from VMware very easy and has been very successful for people testing out the capabilities of Hyper-V in Windows Server 2012.

    Once you are ready to migrate your enterprise from VMware to Hyper-V, you may find the MVMC’s wizard-driven approach limiting, since it can only convert a single machine at a time and doesn’t support batched jobs. Migrating an entire virtual infrastructure through the wizard therefore requires a fair bit of manual effort. Luckily, the MVMC does contain a command-line executable that can be run within a PowerShell script or an Orchestrator runbook.

    So how do you get started? The first step would be to start writing scripts to automate MVMC – but, as it turns out, we already did this for you. Better yet, we’re releasing it for free! Let me introduce you to MAT.

    The MVMC Automation Toolkit (MAT) provides a series of sample PowerShell scripts which automate the migration of large numbers of virtual machines using the MVMC.exe as the conversion engine. Since the MVMC.exe doesn’t provide a method for collecting virtual machines from the VMware environment, the MAT will collect all the machines that meet the criteria for conversion.

    MAT was designed to be easy to use. Point it at your VMware environment and it will provide you a list of all the machines that can be converted. Next step: Pick a handful to convert and grab some lunch.

    When you come back your brand new Hyper-V virtual machines will be waiting.

    MAT takes the stress out of the conversion by removing VMware tools, handling the disk geometry conversions, and quickly getting your virtual machines up and running on Hyper-V. And if you have hundreds (or even thousands) of machines to convert, that’s no problem – you can run several MAT servers at once. The multiple MAT servers will automatically coordinate with one another and speed you through the conversion.
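    To make the batching idea concrete, here is a rough sketch of how a command-line converter can be wrapped in a batch driver and its queue spread across several conversion servers. The real MAT is written in PowerShell, and the mvmc.exe arguments below are hypothetical placeholders, not the tool’s actual syntax:

```python
# Sketch of the batching idea behind MAT: build one converter command per VM,
# round-robin the queue across several conversion servers, then run this
# server's share in a loop. The mvmc.exe flags are hypothetical placeholders.

import subprocess

def build_command(vm_name: str, dest_dir: str) -> list[str]:
    """Assemble one VMDK-to-VHD conversion command (hypothetical flags)."""
    return ["mvmc.exe", "--source", vm_name, "--destination", dest_dir]

def assign_to_servers(vms: list[str], servers: list[str]) -> dict[str, list[str]]:
    """Round-robin the conversion queue across several MAT servers."""
    plan: dict[str, list[str]] = {s: [] for s in servers}
    for i, vm in enumerate(vms):
        plan[servers[i % len(servers)]].append(vm)
    return plan

def run_local_batch(vms: list[str], dest_dir: str) -> dict[str, bool]:
    """Run this server's share of the queue, recording per-VM success."""
    results = {}
    for vm in vms:
        proc = subprocess.run(build_command(vm, dest_dir), capture_output=True)
        results[vm] = proc.returncode == 0
    return results
```

    The per-VM success map is the same kind of state the real toolkit persists in SQL Express, which is what lets a long conversion run safely span days or weeks.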

    MAT uses SQL Express to store the conversion information for each virtual machine so that you can have consistent data during your conversion – whether that takes days, weeks or months. On top of all this, the MAT is written in PowerShell, so it’s easy to understand, and incredibly easy to customize and extend.

    The MAT was created by the same team that built the PowerShell Deployment Toolkit (PDT), and the two share much of the same framework. The MAT also borrows from some of the concepts used in the runbooks by another project from that team: Orchestrating Hyper-V Replica with System Center for Planned Failover. (The PDT, by the way, is a great tool that will let you deploy all the System Center 2012 SP1 components quickly and easily.)

    If you are preparing to migrate a large number of VMware virtual machines, take a look at MAT first. The Building Clouds Blog is home to several posts about the MAT (specifically the VM Migration track) and will be a continuing source of information about it.

    You can download the MAT here.

     

  • Recap: Worldwide Partner Conference 2014

    Earlier today I was a part of the Day 1 keynote at Microsoft’s annual Worldwide Partner Conference. This is one of the biggest events of the year for Microsoft, and we covered some HUGE topics today. You can read a recap of the day’s events here.

    In particular during my section of the keynote, we covered three core things:

    1. Microsoft’s layered protection strategy for mobile devices and corporate data.
    2. The practical value and functionality of Office 365 + Enterprise Mobility Suite.
    3. How and why to work like a network.

    These things give our partners and customers the technology they need to succeed in a mobile-first, cloud-first world. Enterprises all over the world are betting on Microsoft – and Microsoft is betting on the enterprise!

    Here’s a quick video recap (and more summary after the jump).

    Layered Protection

    This is a topic that I don’t think can be overemphasized – and it’s something I want every Microsoft partner and customer to take advantage of as soon as possible. By applying specialized and focused layers of protection at every critical phase of data’s path between the corporate network and a mobile device, you can (as noted in previous posts) safeguard what you’ve built and what you manage no matter where the user, the device, or the data travels.

    Security is a make or break feature of any mobility strategy, and your mobile management strategy needs an enterprise-grade standard of protection that accounts for the needs of your device, the app, and the file.

    With the Enterprise Mobility Suite you are able to deploy a multi-layered protection solution:

    • Layer 1: Protecting at the device (MDM in Intune)
    • Layer 2: Protecting at the app (MAM in Intune)
    • Layer 3: Protecting at the file (Azure RMS)
    • Layer 4: Protecting the identity and corporate access (AAD Premium)

    The EMS allows IT teams to make access controls a native part of files, and it ensures that your mobile devices work flawlessly with apps like Office and Acrobat.

    This layered, comprehensive approach is unique to Microsoft – in part because we’ve made long-term usability a central facet of our enterprise mobility solution, and also because we have the enterprise experience, expertise, and insight that is lacking in other mobility products. We offer a better device management solution, and we offer the only file management and identity management solution.

    Office 365 + Enterprise Mobility Suite

    The widespread adoption of Hybrid Cloud infrastructure means that now many types of organizations can operate at a speed, a scale, and a level of efficiency never before possible. In fact, if you walked the convention floor at TechEd you saw dozens of businesses and apps that only exist and operate in a Hybrid environment.

    Fully Hybrid apps are the kinds of things the IT industry was dreaming about just a couple of years ago. Now, apps that can draw upon machine learning and monitor/adapt based on work behavior are dramatically expanding the level and quality of the services that IT can provide.

    I’ve written a lot about how, in today’s enterprise environment, there simply is not an internet-connected behavior that can’t be accelerated or improved by the cloud. These types of Hybrid apps are showing app creators and end-users the awesome things possible with Hybrid speed, scale, and efficiency.

    EMS and O365 offer enterprises the opportunity to embrace the value, scale, and flexibility of a Hybrid approach to running a business.  Together they provide end-users with a holistic mobility + productivity solution that is more complete and offers better user experiences. On a practical level, O365 and EMS offer more functionality at a lower cost. This means that IT Pros are now in a position to enable more access to corporate data and more connections. This places the IT teams of any organization in a position of strength: They are able to use data to empower their companies with a more complete understanding of their customers, their market, and their own operations.

    When considered with the scope and dimension it deserves, Enterprise Mobility extends well beyond MDM and BYOD – reaching all the way to proactively (and securely) governing new apps and services (e.g. SaaS) within an organization. The EMS was assembled precisely to address all of these needs by providing (as referenced above) identity and access management (via AAD Premium), MDM and MAM (via Intune), and data protection (via Azure RMS).

    Work like a Network

    One of the dreams that end users share is the ability to have their work data surfaced to them in a manner that is personalized and easy to act upon. In our personal lives we all use apps that constantly deliver a view of the things that are important to us outside of work, e.g. our hobbies, family events, news from around the world, etc. We use these apps to stay constantly in touch with the things that matter most to us. In most corporations, however, the reality is that information and knowledge flow in a very hierarchical manner – and more often than not this knowledge gets stranded within the hierarchy and/or in unconnected systems with broken communication channels.

    When this happens, innovation and decision making slow down dramatically. When I see research indicating that the average information worker spends 20% of his/her time (i.e. 1 day out of 5 each week!) searching for information, I can’t help but think what a terrible waste that is for an organization that already has the information stranded somewhere inside it.

    We need to have the enterprise tools that support a workplace culture which enables us to work like a network – where information flows quickly and without the constraints that exist today. As we work like a network, the information and knowledge that we require in our work lives will surface to us in a personalized and actionable manner – just like it does in our personal digital lives.

    * * *

    There is a lot to be excited about in what Microsoft has brought to the table here at WPC, as well as in what lies ahead in the very near future. Our comprehensive cloud services – which span infrastructure, productivity, data mobility and managed services – provide the technology you need to really set your organization apart.

    Enjoy the rest of WPC 2014!

  • Podcast Thursday: 3 Reasons IT Has a Very Bright Future

    In this week’s episode (#14), we revisit a topic I wrote about following TechEd North America:  The reasons I believe that the IT industry is changing in ways that really favor IT Pros.

    The discussion in this episode focuses on the positive changes that will affect day-to-day work, career opportunities, and our expectations for technology.  Whether you’re a glass-half-full or glass-half-empty type, I think this discussion is a good one for everyone to consider.

  • Continually Improving Continuous Availability

    Several weeks ago I met with a customer who expressed a very common problem: “If we can’t access our data at any moment, from any location, we immediately begin losing money.”

    This comment was made before any other topic we discussed that day – before concerns about app management, security, or even storage costs. For many businesses, the cost of data storage is minimal compared to the cost of not being able to access that data.

    Several studies have been done on this very issue, and the results consistently show that close to 40% of companies that lose access to their data for 24 hours or more are irreparably damaged in the eyes of their customers.  I don’t take figures like these lightly. This is a reality that makes continuous availability more than just a competitive advantage – it is a table-stakes requirement in today’s hyper-competitive environment.

    This mindset has been a long time coming – just a few years ago, several hours of data loss or app downtime was acceptable (albeit incredibly unpleasant), but now we’ve advanced to the point where an enterprise environment expects zero to a few minutes of data loss and recovery time. For high traffic or high value data, even this minimal amount of downtime can be a huge problem.

    There are a lot of different solutions geared toward the challenge of continuous availability, but the majority are so expensive that their use is limited to mission-critical workloads, which leaves much of the IT infrastructure exposed. Other services rely only on data backup as the availability solution, or deploy a combination of HA, DR, and backup. In each of these cases the end result is complex and expensive – and still not good enough to satisfy enterprise-grade availability requirements.

    The lines between HA, backup and DR are getting increasingly blurry, and, to stay ahead of a disaster scenario, it is important for continuous availability to be woven into the fabric of cloud computing. These solutions should offer a range of protection and recovery options that include:  Zero to minimal data loss, recovery times of seconds to minutes (within and across data centers and clouds), and a single management interface to configure, deploy, and manage HA/DR/backup across a hybrid enterprise that spans multiple clouds.

    At Microsoft we have developed and released a number of solutions that are focused on continuous availability, including:

    • Windows Server 2012 allows you to store critical application data (e.g., Hyper-V) on low cost but continuously available SMB3 file shares.
    • The Windows Server 2012 Clustered File Server backed by Storage Spaces delivers a reliable, available, manageable storage platform on cost-effective hardware for various workloads, including Hyper-V, SQL Server, and IW data.
    • Within Windows Server 2012 we also offer Windows Server Backup, which enables you to back up your Windows Servers to another Windows Server at intervals as small as every few minutes. This kind of regular backup means that even if you lose an entire server or disk, your recovered data is never more than a couple of minutes old.
    • Hyper-V Replica is another Windows Server 2012 capability that allows you to replicate all your VMs to another server and have a secondary copy that is also never more than a couple minutes old.
    • System Center Data Protection Manager is built on the same capabilities as Windows Server Backup and provides additional command, control, and reporting in a one-to-many environment.
    • Finally, in November we acquired a company named StorSimple, which has done some amazing innovation in tiered storage – enabling you to create policies on where and how you want to tier storage, both locally and into Azure. This solution is incredibly exciting, and in a future post I’ll discuss StorSimple in much greater detail.
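    A quick back-of-envelope way to think about interval-based protection like Windows Server Backup and Hyper-V Replica: if a copy is taken every N minutes, a restore from the latest copy is never more than roughly N minutes (plus the copy time itself) behind. A purely illustrative sketch of that arithmetic:

```python
# Back-of-envelope recovery-point arithmetic for interval-based protection:
# with a copy taken every `interval_min` minutes, the recovered data is at
# most that many minutes (plus the time the copy itself takes) out of date.

def worst_case_data_loss_min(interval_min: float, copy_duration_min: float = 0.0) -> float:
    """Upper bound, in minutes, on data lost when restoring the latest copy."""
    return interval_min + copy_duration_min

def recovery_points_per_day(interval_min: float) -> int:
    """How many restore points one day of protection produces."""
    return int(24 * 60 // interval_min)
```

    A 5-minute interval, for example, keeps worst-case loss at about five minutes while generating 288 restore points a day – which is why few-minute intervals translate into recovered data that is never more than a couple of minutes old.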

    Looking ahead, we have some work to do to more closely align these solutions, but the bottom line is that we offer a number of solutions today that help deliver continuous availability in Windows Server and System Center.

    There are some great customer testimonials about this out there already, and we’re continuing to make this kind of support even more valuable with cost-effective hardware, and by adding automation and orchestration to every step of IT management.

    Focusing on recovery options only after a disaster has occurred is no longer enough. By putting the emphasis on the continuous availability of data and applications – even during a disaster – we are really leaping ahead towards the future of computing.

  • Podcast Thursday: Cloud-based Data Protection

    In this episode of the ITC Podcast, I sit down with Alex Simons, the Director of Program Management for Active Directory, to talk about cloud-based data protection with Azure RMS.

    I’ve written about RMS in the past (in the posts about App & Data Protection, as well as the Enterprise Mobility Vision post, for example), and I think this topic is really important.  For perspective, consider that we talk a lot about cloud-based management, cloud-based apps, and devices that consume entirely cloud-based content – but one element that is missing there is cloud-based data protection.  That’s where Azure Rights Management steps in – or, as it’s often called, Azure RMS.

    Millions of people are already using Azure RMS without knowing it because it’s offered as a part of Office 365. We’ve now even extended RMS to on-prem deployments – it supports connections to on-prem deployments of Exchange, SharePoint, and Windows Server.

    If you have questions about RMS or anything about cloud-based security, don’t hesitate to reach out!

  • Using the Cloud to Transform and Unify High Availability, Disaster Recovery, and Data Backup

    Back in mid-April, I discussed the importance of data center high availability, and how the cost of data storage is minimal compared to the cost of not being able to access that data. There are a lot of options for creating a highly available system with disaster recovery and data backup protocols – but many of the current in-market options are expensive, labor intensive, and ineffective.

    Typical disaster recovery services, for example, are surprisingly complex and require an array of SAN replicators with symmetric hardware on both sides, and typical recovery times from a secondary backup are far too long for an enterprise organization. (For more on this, read up on Recovery Time Objectives and Recovery Point Objectives.)

    More and more customers are finding a solution by moving DR and backup services to the cloud.

    The reasons for doing this are simple. Enterprises can now take full advantage of the added functionality and capacity of the Microsoft cloud, while maximizing the ease and cost-efficiency of the move (regardless of the size of the company). This remains true no matter where that secondary site is located – within your own datacenter, with a service provider, or hosted in Windows Azure – it is always up and always available.

    The cloud and its accompanying virtualization solutions offer a huge upgrade for the countless companies that are still using tape, offsite backups, or even warm standby sites. With a cloud-based model, the storage costs are dramatically reduced and – better yet – Azure Storage provides geo replication which creates a replica of your data in a different Azure datacenter.  This helps to protect against local disasters and ensure your data is accessible.

    I know that a certain bias is implicit considering my role here at Microsoft, so I encourage you to weigh your options and don’t just take my word for it that the Microsoft cloud offers a better option. Regardless of your organization’s IT infrastructure, consider a couple of important examples of how Windows Server 2012 and Windows Azure have continuous data availability built into every aspect of their design and feature set.

    These features ensure that, even in the face of a disaster, you can work with an incredibly consistent management experience across your clouds. In addition to this consistent user experience, Windows Azure seamlessly interoperates with Windows Server 2012 to act as an extension of the customer’s data (and vice versa) by supporting reliability across VM’s, seamless networking, identity federation, and compute elasticity. Each of these features, in turn, supports a long list of scenarios like failover/failback, item level recovery, workload migration, bi-directional VM mobility, patch validation, and more.

    I don’t mean for all of this to start sounding like a pitch from the marketing department, but I do want to paint a picture of a comprehensive and complex in-market disaster recovery solution. I’m extremely proud of what we have been able to deliver for our customers, and I’m confident saying that no other company – or combination of companies – can offer a solution like this. Looking ahead, we are constantly working to develop increasingly streamlined ways to integrate solutions like these – and this integration is a top priority as we continue to refine and innovate these tools.

    These products essentially change the way our customers and partners think about (and react to) disaster recovery. They also change the relative magnitude of one of these events. With the Microsoft cloud, a massive system failure may no longer be a matter of life and death for the business – instead, it may just be a matter of minutes.

  • Nine More Reasons Everyone Should be Excited about the Microsoft Cloud

    Earlier this week (both on-stage and on this blog) I commented that “cloud computing is no longer a spectator sport” and that now, more than ever, there are countless reasons to get excited about what your business can do in the cloud.

    Whether you’re looking to dramatically scale, dynamically innovate, or any other combination of superlatives – the cloud is the future of business.

    To give you an idea of how far the cloud has come, and to what lengths Microsoft is going to support it, consider these developments:

    • 76% of all enterprise apps now run on the cloud-friendly Windows Server platform.
    • Microsoft recently announced the availability of Windows Azure Infrastructure Services to make moving apps to the cloud simple.
    • This announcement also featured a commitment to match Amazon Web Services prices.
    • We have also recently announced a new $1 billion investment in Windows Azure datacenters around the world – which includes $100 million in Asia.  Microsoft is also the first multinational public cloud provider to offer public cloud capacity in China (through a strategic partner in mainland China).
    • Windows Azure continues to rapidly grow – reporting YoY revenues up 210%, and a subscriber rate of 1,000 new customers every day.
    • Hyper-V is now growing at 3x the rate of VMware.
    • Microsoft SQL Server is now the most widely used database in the world (46% market share), and has outgrown Oracle by nearly 2x.
    • In terms of productivity, the cloud-based Office 365 is now the fastest-growing product in Microsoft’s 38-year history.
    • To further remove barriers to enterprise cloud adoption, Microsoft is now giving more than half a million MSDN subscribers free, year-round access to 3 new development servers to develop and test new apps on Windows Azure (and keep in mind the announcement on Monday during my keynote about how to win an Aston Martin from the MSDN team!).

    As we all prepare to head back to our companies and make the most of what we’ve learned from each other at TechEd 2013, I want to conclude with four ideas about where IT teams should seriously consider focusing right away:

    1. Extending the on-premises fabric to meet the cloud.  In other words, use automation and management to create a more resilient fabric in the datacenter, deploy apps to a public cloud, or – better yet – combine these two approaches.
    2. We’ve made it easy for you to tackle big technologies like cloud-integrated storage or software-defined networking.  A unified approach to the datacenter (combined with a rethinking of how you do storage/networking/identity) can make things like multi-tenancy much simpler.
    3. Leverage automation and self-service in order to offer your business an array of options for app deployment that is secure and compliant.
    4. Manage devices where they live.  With tools like Windows Intune you have cloud-based MDM that seamlessly integrates with System Center Configuration Manager.  A great example of this at work is the 35,000 unique customers already using Windows Intune.

    With areas of emphasis like these, and the power and scale of a modern datacenter, this is a genuinely limitless opportunity for our industry.

  • Check Out the New Citrix Service Template for XenDesktop 7.1

    Just a couple years ago, Service Templates were introduced in Microsoft System Center Virtual Machine Manager 2012 (SCVMM).  Service Templates represented a new way to enable complex, multi-VM services that could be quickly, easily, and accurately installed.  To get an idea of just how easy Service Templates can make things, check out this quick overview of the Service Template for SharePoint 2013 developed by some of the writers at the Building Clouds blog.

    If you’re one of the thousands of organizations that want to use XenDesktop 7.1 from Citrix, I’m excited to share some great news:

    Citrix, in collaboration with Microsoft and Dell, has developed a Service Template for XenDesktop 7.1 that removes all the guesswork and manual effort involved in installing a complete XenDesktop implementation. By providing a few configuration options, a XenDesktop administrator can install a complete single-site XenDesktop deployment in about an hour, whereas a manual installation could take a day.

    XenDesktop is composed of multiple VMs running on Microsoft Windows Server and Hyper-V, and it can be fully installed using the Citrix-developed SCVMM Service Template, which enables XenDesktop to be deployed easily into a Microsoft private cloud.

    Regular readers of the Server & Cloud blog may recall that the XenDesktop 7.1 release added support for Windows Server 2012 R2 and Windows 8.1 just one week after Windows Server 2012 R2 became generally available.

    To learn more about this new feature, check out the blog post from Citrix announcing the Service Template for XenDesktop 7.1 here.

  • ICYMI: Oracle OpenWorld 2013 Keynote

    Earlier this week I got to deliver a keynote at Oracle OpenWorld 2013.  It was a lot of fun, and pretty exciting to be the first Microsoft exec to talk to this massive and important part of the IT industry.

    In case you missed it, you can check out the full keynote here, or the highlights below.

    Thanks again to Larry and the Oracle team for the invite!

  • SAP, HP and Microsoft Set New SAP World Record Using Hyper-V

    Back in May, I discussed how technologies such as Windows Server 2012, Hyper-V, and System Center 2012 SP1 provide the most scalable, reliable, and feature-rich platform for running key tier-1 workloads like SQL Server, SharePoint, and Exchange at the lowest cost.

    To help customers virtualize these workloads, we’ve recently published a number of best practice whitepapers for the virtualization and management of SQL Server, SharePoint and Exchange, and we’ve also shared some phenomenal performance testing results which underscore that the Microsoft platform is unequivocally the best platform for virtualizing tier-1 workloads.

    But I’m realistic – I understand that there are organizations who also run other tier-1 applications within their environments, and Microsoft wants to ensure that our customers can virtualize those other workloads with the same confidence they have when virtualizing Microsoft workloads.

    One of the most common workloads in enterprise environments is SAP Enterprise Resource Planning (ERP), a solution that provides access to critical data, applications, and analytical tools, and helps organizations streamline processes across procurement, manufacturing, service, sales, finance, and HR. For a demanding workload like SAP ERP, many of our customers assume that they need to run the solution on physical servers – an assumption backed up by the large number of existing SAP benchmarks that highlight the huge scale and performance achievable on a physical platform.

    So what does that mean for customers who want to virtualize SAP ERP? Can it be virtualized successfully and deliver the necessary levels of performance required for tier-1 applications?

    The answer is, unequivocally, yes.

    I’m proud to announce that, on June 24th, 2013, through a close collaboration between SAP, HP and Microsoft, a new world record was achieved and certified by SAP for a three-tier SAP Sales and Distribution (SD) standard application benchmark, running on a set of 2-processor physical servers.

    The benchmark achieved 42,400 SAP SD benchmark users, 231,580 SAPS, and a response time of 0.99 seconds – phenomenal performance from a DBMS server with just 2 physical processors (16 cores, 32 CPU threads).

    The best part? Not only was SAP ERP 6.0 (with Enhancement Package 5) running on SQL Server 2012 on Windows Server 2012 Datacenter, but the configuration was completely virtualized on Hyper-V. In addition, this is the first SAP benchmark with virtual machines configured with 32 virtual processors, and consequently the first with SQL Server running in a 32-way virtual machine. The result is also more than 30% higher than a previous virtualized configuration (2 processors/12 cores/24 CPU threads) running on VMware vSphere 5.0.
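
    The “more than 30%” figure follows directly from the two published SAPS numbers. A quick back-of-the-envelope check, using only the results quoted in this post:

    ```python
    # Comparing the two published SAP SD three-tier benchmark results
    # quoted in this post (SAPS = SAP Application Performance Standard).

    hyperv_saps = 231_580  # Hyper-V result (2 processors / 16 cores / 32 threads)
    vmware_saps = 175_320  # vSphere 5.0 result (2 processors / 12 cores / 24 threads)

    # Relative improvement of the Hyper-V result over the vSphere result.
    improvement = (hyperv_saps - vmware_saps) / vmware_saps
    print(f"Hyper-V result is {improvement:.1%} higher")  # → Hyper-V result is 32.1% higher
    ```

    Note that the two runs used different server generations and core counts (see the fine print at the end of this post), so this is the headline throughput comparison as published, not a per-core comparison.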

    It’s clear from this benchmark that with the massive scalability and enterprise features of Windows Server 2012 Hyper-V, along with HP’s ProLiant BL460c Gen8 servers, 3PAR StoreServ Storage, and Virtual Connect networking capabilities, customers can virtualize their mission-critical, tier-1 SAP ERP solution with confidence.

    You can find the full details of the benchmark on the SAP Benchmark Site, and you can also read more information about running SAP on Windows Server, Hyper-V & SQL Server, over on the SAP on SQL Server Blog.

    For more details visit: http://www.sap.com/benchmark


    Note:

    Benchmark performed in Houston, TX, USA on June 8, 2013. Results achieved 42,400 SAP Standard SD benchmark users, 231,580 SAPS and a response time of 0.99 seconds in a SAP three-tier configuration SAP EHP 5 for SAP ERP 6.0. Servers used for Application servers: 12 x ProLiant BL460c Gen8 with Intel Xeon E5-2680 @ 2.70GHz (2 processors/16 cores/32 threads) and 256GB using Microsoft Windows Server 2012 Datacenter on Windows Server 2012 Hyper-V. DBMS Server: 1 x ProLiant BL460c Gen8 with Intel Xeon E5-2680 @ 2.70GHz (2 processors/16 cores/32 threads) and 256GB using Microsoft Windows Server 2012 Datacenter on Windows Server 2012 Hyper-V using Microsoft SQL Server 2012 Enterprise Edition

    VMware ESX 5.0-based benchmark performed in Houston, TX, USA on October 11, 2011. Results achieved 32,125 SAP Standard SD benchmark users, 175,320 SAPS and a response time of 0.99 seconds in a SAP three-tier configuration SAP EHP 4 for SAP ERP 6.0. Servers used for Application servers: 10 x ProLiant BL460c G7 with Intel Xeon X5675 @ 3.06GHz (2 processors/12 cores/24 threads) and 96 GB using Microsoft Windows Server 2008 Enterprise on VMware ESX 5.0. DBMS Server: 1 x ProLiant BL460c G7 with Intel Xeon X5675 @ 3.06GHz (2 processors/12 cores/24 threads) and 96 GB using Microsoft Windows Server 2008 Enterprise on VMware ESX 5.0 using Microsoft SQL Server 2008 Enterprise Edition.