It's all about Microsoft Infrastructure...

Here you can find information about Virtualization, System Center, Unified Messaging, Directory Services, Deployment, Microsoft Certification, and much more...

October 2011

  • Virtualizing SQL

    Running SQL Server with Hyper-V Dynamic Memory - Best Practices and Considerations

    http://sqlcat.com/sqlcat/b/whitepapers/archive/2011/08/02/running-sql-server-with-hyper-v-dynamic-memory-best-practices-and-considerations.aspx

     

    Onboarding SQL Server Private Cloud Environment

    http://sqlcat.com/sqlcat/b/whitepapers/archive/2011/03/22/onboarding-sql-server-private-cloud-environment.aspx

     

    High Performance SQL Server Workloads on Hyper-V

    http://sqlcat.com/sqlcat/b/whitepapers/archive/2010/05/27/high-performance-sql-server-workloads-on-hyper-v.aspx

  • Windows Server 8 Hyper-V Overview

    In case you didn’t see this blog post:

    http://blogs.technet.com/b/server-cloud/archive/2011/10/11/windows-server-8-hyper-v-overview.aspx

  • Demystifying Exchange 2010 SP1 Virtualization

    Important information from Exchange Team

    http://blogs.technet.com/b/exchange/archive/2011/10/11/demystifying-exchange-2010-sp1-virtualization.aspx

     

    It’s been a few months since we announced some major changes to our virtualization support statements for Exchange 2010 (see Announcing Enhanced Hardware Virtualization Support for Exchange 2010). Over that time, I’ve received quite a few excellent questions about particular deployment scenarios and how the changes to our support statements might affect those deployments. Given the volume of questions, it seemed like an excellent time to post some additional information and clarification.

    First of all, a bit of background. When we made the changes to our support statements, the primary thing we wanted to ensure was that our customers wouldn’t get into a state where Exchange service availability might be reduced as a result of using a virtualized deployment. To put it another way, we wanted to make sure that the high level of availability that can be achieved with a physical deployment of the Exchange 2010 product would not in any way be reduced by deploying on a virtualization platform. Of course, we also wanted to ensure that the product remained functional and that we verified that the additional functionality provided by the virtualization stack would not provide an opportunity for loss of any Exchange data during normal operation.

    Given these points, here’s a quick overview of what we changed and what it really means.

    With Exchange 2010 SP1 (or later) deployed:
    • All Exchange 2010 server roles, including Unified Messaging, are supported in a virtual machine.
    • Unified Messaging virtual machines have the following special requirements:
      • Four virtual processors are required for the virtual machine. Memory should be sized using standard best practices guidance.
      • Four physical processor cores must be available for use at all times by each Unified Messaging role virtual machine. This means that no processor oversubscription can be in use, because oversubscription would affect the ability of the Unified Messaging role virtual machine to utilize physical processor resources.
    • Exchange server virtual machines (including Exchange Mailbox virtual machines that are part of a DAG), may be combined with host-based failover clustering and migration technology, as long as the virtual machines are configured such that they will not save and restore state on disk when moved, or taken offline. All failover activity must result in a cold boot when the virtual machine is activated on the target node. All planned migration must either result in shutdown and cold boot, or an online migration that makes use of a technology like Hyper-V Live Migration. Hypervisor migration of virtual machines is supported by the hypervisor vendor; therefore, you must ensure that your hypervisor vendor has tested and supports migration of Exchange virtual machines. Microsoft supports Hyper-V Live Migration of these virtual machines.

    Let’s go over some definitions to make sure we are all thinking about the terms in those support statements in the same way.

    • Cold boot: This refers to the action of bringing up a system from a power-off state into a clean start of the operating system. No operating system state has been persisted in this case.
    • Saved state: When a virtual machine is powered off, hypervisors typically have the ability to save the state of the virtual machine at that point in time so that when the machine is powered back on it will return to that state rather than going through a “cold boot” startup. “Saved state” would be the result of a “Save” operation in Hyper-V.
    • Planned migration: When a system administrator initiates the move of a virtual machine from one hypervisor host to another, we call this a planned migration. This could be a single migration, or a system admin could configure some automation that is responsible for moving the virtual machine on a timed basis or as a result of some other event that occurs in the system other than hardware or software failure. The key point here is that the Exchange virtual machine is operating normally and needs to be relocated for some reason – this can be done via a technology like Live Migration or vMotion. If the Exchange virtual machine or the hypervisor host where the VM is located experiences some sort of failure condition, then the result of that would not be “planned”.
    Virtualizing Unified Messaging Servers

    One of the changes made was the addition of support for the Unified Messaging role on Hyper-V and other supported hypervisors. As I mentioned at the beginning of this article, we did want to ensure that any changes we made to our support statement resulted in the product remaining fully functional and providing the best possible service to our users. As such, we require Exchange Server 2010 SP1 to be deployed for UM support. The reason for this is quite straightforward. The UM role is dependent on a media component provided by the Microsoft Lync team. Our partners in Lync did some work prior to the release of Exchange 2010 SP1 to enable high quality real-time audio processing in a virtual deployment, and in the SP1 release of Exchange 2010 we integrated those changes into the UM role. Once that was accomplished, we did some additional testing to ensure that user experience would be as optimal as possible and modified our support statement.

    As you’ll notice, we do have specific requirements around CPU configuration for virtual machines (and hypervisor host machines) where UM is being run. This is additional insurance against poor user experience (which would show up as poor voice quality).
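
    The no-oversubscription rule above boils down to a simple capacity check: the total vCPUs assigned on a host must not exceed its physical cores whenever UM virtual machines are present. Here is a minimal, hypothetical sketch of that check (illustrative only – not a Microsoft tool, and the function name is our own):

```python
# Hypothetical sanity check for the UM sizing rule described above:
# each UM virtual machine needs 4 vCPUs, and the host must be able to
# dedicate 4 physical cores to each one (no processor oversubscription).

def um_host_is_supported(physical_cores: int, um_vm_count: int,
                         other_vcpus: int = 0) -> bool:
    """Return True if a host can run `um_vm_count` UM VMs without
    oversubscribing its physical cores."""
    UM_VCPUS = 4  # fixed requirement per UM virtual machine
    total_vcpus = um_vm_count * UM_VCPUS + other_vcpus
    # No oversubscription: total assigned vCPUs must not exceed cores.
    return total_vcpus <= physical_cores

# A 16-core host can run 3 UM VMs (12 vCPUs) plus 4 other vCPUs...
print(um_host_is_supported(16, 3, other_vcpus=4))  # True
# ...but adding a fourth UM VM would oversubscribe the host.
print(um_host_is_supported(16, 4, other_vcpus=4))  # False
```

    In practice the same arithmetic applies to any hypervisor host running the UM role: count every vCPU assigned on the box, not just those belonging to UM virtual machines.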

    Host-based Failover Clustering & Migration

    Much of the confusion around the changed support statement stems from the details on combining host-based failover clustering and migration technology with Exchange 2010 DAGs. The guidance here is really quite simple.

    • First, let’s talk about whether we support third-party migration technology (like VMware’s vMotion). Microsoft can’t make “support” statements for the integration of 3rd-party hypervisor products using these technologies with Exchange 2010, as these technologies are not part of the Server Virtualization Validation Program (SVVP) which covers the other aspects of our support for 3rd-party hypervisors. We make a generic statement here about support, but in addition you need to ensure that your hypervisor vendor supports the combination of their migration/clustering technology with Exchange 2010. To put it as simply as possible: if your hypervisor vendor supports their migration technology with Exchange 2010, then we support Exchange 2010 with their migration technology.

    • Second, let’s talk about how we define host-based failover clustering. This refers to any sort of technology that provides automatic ability to react to host-level failures and start affected VMs on alternate servers. Use of this technology is absolutely supported within the provided support statement given that in a failure scenario, the VM will be coming up from a cold boot on the alternate host. We want to ensure that the VM will never come up from saved state that is persisted on disk, as it will be “stale” relative to the rest of the DAG members.

    • Third, when it comes to migration technology in the support statement, we are talking about any sort of technology that allows a planned move of a VM from one host machine to another. Additionally, this could be an automated move that occurs as part of resource load balancing (but is not related to a failure in the system). Migrations are absolutely supported as long as the VMs never come up from saved state that is persisted on disk. This means that technologies that move a VM by transporting its state and memory over the network with no perceived downtime are supported for use with Exchange 2010. Note that a 3rd-party hypervisor vendor must provide support for the migration technology, while Microsoft will provide support for Exchange when used in this configuration. In the case of Microsoft Hyper-V, this would mean that Live Migration is supported, but Quick Migration is not.

    With Hyper-V, it’s important to be aware that the default behavior when selecting the “Move” operation on a VM is actually to perform a Quick Migration. To stay in a supported state with Exchange 2010 SP1 DAG members, it’s critical that you adjust this behavior as shown in the VM settings below (the settings displayed here represent how you should deploy with Hyper-V):

    Figure 1: The correct Hyper-V virtual machine behavior for Database Availability Group members

    Let’s review. In Hyper-V, Live Migration is supported for DAG members, but Quick Migration is not. Visually, this means that this is supported:

    Figure 2: Live Migration of a Database Availability Group member in Hyper-V is supported

    And this is not supported:

    Figure 3: Quick Migration of Database Availability Group members is not supported

    Hopefully this helps to clarify our support statement and guidance for the SP1 changes.

  • Free Training: Building the Private Cloud


    Free one-day online conference for developers and administrators of cross-platform Cloud products and technologies.

    Thursday, November 10, 2011 | 11:00 am - 4:00 pm Eastern

    Join Cloud IT Pro for Building the Private Cloud. This conference brings industry experts straight to your computer, offering free, technical Cloud training.
    Attendees will have the opportunity to ask live questions via our platform’s interface, network with other attendees, and chat directly with vendors in the Exhibit Hall. Benefits also include prize giveaways, announcements, and more!

    You'll be able to join experts Mel Beckman, Michael Otey, Sean Deuby and others for a series of free technical sessions for developers and administrators that will help you get the most value from the Cloud.

    Building the Private Cloud provides everything that an in-person, peer-to-peer-to-expert event offers -- without the travel, without the cost, and without the time away from your desk!

    Register Now

  • Forrester study: ROI of Office 365

    As with any critical business decision, organizations considering Office 365 should first ask the question: Is it really worth it?

    A recent study conducted by Forrester Consulting answers that question with a resounding YES. The study, which was commissioned by Microsoft and analyzes the total economic impact of Office 365 on midsize businesses, found that Office 365 delivers an ROI of 321 percent with a payback period of two months for the composite midsize organization. Now that’s value!

    To complete its analysis, Forrester interviewed seven mid-sized businesses and then created a composite midsize organization based on the experience of these organizations. The composite organization had 150 employees, with some mobile workers, others at corporate headquarters and another group in global branch offices. The June 2011 study found that the three-year risk-adjusted benefits of Office 365 add up to $1.17 million with a payback period of two months. With costs of $277,000, the net present value of Office 365 is $890,000.

    The study lists the non-risk adjusted benefits a midsize organization can expect to see, attaching a monetary value to each, where applicable. Here are a few that were on the list:

    1. Knowledge worker productivity gain: $657,000 across all employees in the organization
    2. Mobile worker incremental productivity gain: $168,750 over three years
    3. Eliminated hardware: Savings of $64,000 over three years
    4. Eliminated third-party software: Savings of $10,000 over three years
    5. Web conferencing savings: Savings of $25,000 over the life of the study
    6. Substituted Microsoft licenses: Savings of $125,000 in the initial period of the study
    7. Avoided on-premises planning and implementation labor: Savings of $35,000 in internal labor and professional services costs
    8. Reduced IT support effort: Savings of $206,350 over three years
    9. Reduced travel costs and corresponding CO2 emissions: Savings of $260,625 over the life of the study and a reduction of 47,000 kg of CO2 emissions from air travel
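
    The headline figures quoted above hang together arithmetically. As a rough check (in thousands of dollars, using the study's rounded numbers; Forrester's TEI methodology reports ROI as net benefits divided by costs, and the risk adjustment and discounting behind these figures are not reproduced here):

```python
# Sanity check of the Forrester figures quoted above (values in
# thousands of US dollars). Forrester's TEI methodology reports ROI
# as net present value divided by costs; the risk adjustment and
# discounting behind these numbers are not reproduced here.

benefits_pv = 1170   # three-year risk-adjusted benefits, $1.17M
costs_pv = 277       # three-year costs, $277K

npv = benefits_pv - costs_pv               # 893, which the study rounds to $890K
roi_percent = round(890 / costs_pv * 100)  # using the study's rounded NPV

print(npv)          # 893
print(roi_percent)  # 321
```

    The two-month payback period follows from the same numbers: the first-year benefits recover the $277K cost well within the first quarter.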

    Of all the benefits Office 365 delivers, the primary reason the interviewed companies cited for implementing the service was reducing their total cost of ownership (TCO) for productivity and collaboration tools. Says one study participant: “The cost savings that we see from a cloud-based solution are reason enough to choose Office 365. It saves the company money and allows our IT staff to work on business problems that add more value to the company.”

    To learn more, please check out the full Forrester study.

  • More Quality Issues for VMware...

    Virtualization Nation,

    Shortly on the heels of its latest vSphere 5.0 release, VMware has already recalled one of its new features, telling customers to disable it manually while it develops a patch. This is the latest in a long string of QA SNAFUs for VMware. In this latest incident, VMware enabled a new feature that causes the following symptoms:

    • When performing a Storage vMotion or a Virtual Machine Snapshot, you experience poor system performance.
    • A Storage vMotion or Virtual Machine Snapshot fails or times out.

    Yes, that’s right. Your Storage vMotion or Virtual Machine Snapshot may just FAIL.

    I’ve included the full VMware KB below and a blog post covering the issue as well.

    VMware KB:

    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2007427

    Blog (boche.net):

    http://www.boche.net/blog/index.php/2011/09/30/vmware-issues-recall-on-new-vsphere-5-0-unmap-feature/

  • Exchange - Future of /Hosting Mode

     

    http://blogs.technet.com/b/exchange/archive/2011/10/13/future-of-hosting-mode.aspx

    With the release of Exchange 2010 SP1, we introduced the /hosting mode switch – a feature which deploys Exchange using an Active Directory structure that affords complete separation between tenant organizations shared on the same underlying platform. /Hosting mode makes the need for a solution like Hosted Messaging and Collaboration largely redundant when hosting multi-tenant Exchange. /Hosting mode does not ship with any automation tools necessary for hosters to operate a service at scale, but it does address the requirements typical of a multi-tenant infrastructure (such as tenant organizations and service plans).

    On one hand, /hosting mode solves many challenges inherent to offering this type of service. On the other hand, /hosting mode offers a reduced feature set as compared to the typical on-premises configuration. In the time since we released SP1 and /hosting mode, we have heard from customers and partners alike that many of these features are a fundamental requirement for doing business, and so to this end I announced that hosters would be supported when using the on-premises configuration from SP2 onwards (assuming their configuration meets certain design requirements). In addition, we also provided perspective on which is the right approach for hosters to take.

    The purpose of this blog post is to explain the next step in the evolution of our thinking regarding /hosting mode. After hearing feedback about the importance of these features, we have concluded that the best approach to multi-tenant hosting on Exchange is to use the on-premises configuration as the basis for a hosting infrastructure. As such, no additional features will be added to /hosting mode, and it will not be carried forward into the next version of Exchange. Here are a few key facts you’ll need to know:

    • /hosting mode will be supported through the standard support lifecycle for Exchange 2010. It will still be available in SP2 and any future service packs or roll-ups. No additional functionality or features will be added to /hosting mode, however, and we don’t recommend using /hosting mode going forward due to its reduced feature set and the fact that it will add complexity to future upgrades.
    • Multi-tenant hosting on the next version of Exchange will be supported, in a similar fashion to the approach we will take with Exchange 2010 SP2.
    • Migrating from Exchange 2010 /hosting mode to the on-premises configuration of Exchange (2010 or future versions) will require deployment into a separate forest.
    • Microsoft will publish guidelines for hosting a multitenant environment using the on-premises configuration. Microsoft will also publish a step-by-step process for upgrading from Exchange 2007 HMC or migrating from Exchange 2010 SP1 /hosting to Exchange 2010 SP2 using the on-premises configuration.
    • Hosting automation tools and control panel solutions will be provided by our hosting ISV partners. We are working closely with them to ensure their solutions meet our hosting guidelines (and will therefore be supported).

    While this represents a change in direction, and will no doubt result in varying amounts of migration work on the part of hosters who have deployed /hosting mode, there is good news. This new approach means that hosters will be able to offer a wider set of hosted Exchange features to their customers. In addition, integrating Lync into a hosted service portfolio will be more streamlined and much simpler.

    Looking at the data we presented at Worldwide Partner Conference, this opens up a sizeable market opportunity for hosters. Most customers looking to upgrade to Exchange 2010 cite the product’s advanced capabilities (such as Exchange UM – not available in /hosting mode) and a hosted UC service as a primary driver.

    The obvious question you’re probably asking yourself is “what should I do now?” If you haven’t yet deployed Exchange 2010, our recommendation is to avoid /hosting mode and go directly to Exchange 2010 SP2 using the on-premises configuration. This will allow you to offer the features customers are looking for, and avoid a cross forest migration down the road.

    If you’ve already deployed /hosting mode, you will continue to be fully supported through the standard support lifecycle of Exchange 2010, though you will continue to have a reduced set of features at your disposal. In addition, if you’re planning on hosting Lync and have deployed Exchange using /hosting mode, you will need to deploy Lync in a separate forest. You might consider switching to Exchange 2010 SP2 using the on-premises configuration.  If neither of these issues are considerations, stay the course until the next version of Exchange is available.

    As I mentioned above, documentation for hosting as well as a step-by-step process for both scenarios will be forthcoming from my team in the coming months.

    We appreciate that the challenges involved with this decision are considerable, but we do believe this is the best, most flexible course of action available for our service provider partner community going forward. We will provide more information and details in the coming months, but wanted to be clear about this directional change as you make plans for your infrastructure today.

  • Microsoft’s Data Center Takes Fresh Approach On Water Reuse

     

    http://blogs.technet.com/b/msdatacenters/archive/2011/10/13/microsoft-s-data-center-takes-fresh-approach-on-water-reuse.aspx

     

    From: Christian Belady, General Manager of Data Center Advanced Development

    Around the globe, water is becoming a scarcer and more valuable commodity, and that’s an important factor for data center operators and cloud service providers to consider as consumers and businesses aggressively adopt cloud-based computing. It’s even more critical that all of us in the industry make sure that, beyond building sustainability into our designs, running data centers to higher standards of efficiency, and measuring impact constantly, we are helping the industry at large to think out of the box.

    Today offers one of those opportunities. In Quincy, Washington, we are taking steps to transfer the operations of our Water Treatment Plant, located on our data center site, to the City of Quincy. This project involves innovative agreements for promoting a long term sustainable use of a limited natural resource, water, in a desert area that has the added benefit of supporting the foundation of Quincy and Grant County’s growing economy for years to come. To my knowledge, it is the first known transfer of a water treatment plant to a municipality in our industry and I would like to share why I think this type of collaborative project helps the industry and environment benefit as a whole.


    Microsoft’s Quincy, Washington Water Treatment Plant

    Microsoft’s current water treatment plant extracts the minerals from the potable water supply prior to using that water in cooling our local 500,000 square foot data center. (Note: our new modular Quincy data center that went live in January 2011 uses airside economization for cooling and substantially less water).

    Moving forward, the City of Quincy will lease the water treatment plant from Microsoft for $10 annually and will provide the company with reduced water rates. The plant will be operated, maintained, and managed by the city, with an option to purchase it after 30 years. By loaning these assets to the city, Microsoft enabled Quincy to save significant construction costs for the new Reuse System.

    The strategic location of the water treatment plant will also benefit other local businesses and industrial users, such as other data centers, food storage and processing companies, and so on. The City of Quincy plans to retrofit the plant as an expanded industrial reuse system in two phases. Following the first phase, the system will generate approximately 400,000 gallons per day (150 million gallons per year) using food processor wastewater effluent. The second phase upgrade is projected to produce 2.5 to 3.0 million gallons per day (1 billion gallons per year), with about 20 percent being used by local industries and the remaining being used to recharge the aquifer around Quincy.
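
    Those per-day and per-year figures line up with simple arithmetic. A quick check (gallons, assuming a plain 365-day year with no seasonal variation):

```python
# Quick check that the daily and annual volumes quoted above line up
# (gallons; simple 365-day year, no seasonal variation assumed).

phase1_per_day = 400_000
phase1_per_year = phase1_per_day * 365
print(phase1_per_year)  # 146,000,000 -> roughly the quoted 150M gal/yr

phase2_per_day = (2_500_000 + 3_000_000) / 2   # midpoint of 2.5-3.0M
phase2_per_year = phase2_per_day * 365
print(phase2_per_year)  # ~1.0 billion -> matches the quoted 1B gal/yr

# About 20% of phase-2 output goes to local industry; the rest
# recharges the aquifer around Quincy.
aquifer_share = phase2_per_year * 0.80
```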

    This collaborative partnership with the City of Quincy solves a local sustainability need by taking a fresh look at integrating our available resources and allows Microsoft to focus on its core expertise in meeting the needs of its customers who use our Online, Live and Cloud Services such as Hotmail, Bing, BPOS, Office 365, Windows Live, Xbox Live and the Windows Azure platform.

    As I have said before, we will continue to look for ways to reduce and eventually eliminate the use of resources, including water, in our data center designs. Today, this project reflects our firm commitment to that vision. I hope it also sets the stage for a healthy discussion within our industry to continue to explore ways to share our investments and best practices within the industry and regions in which we do business.

    For more information on our Quincy, WA data center please watch our data center tour video.

  • US Government Builds Heavily on Cloud Services

    http://www.informationweek.com/news/government/cloud-saas/231900355

    The Department of Homeland Security has committed to 12 cloud service offerings, including nine private cloud offerings and three public cloud offerings, as part of its commitment to the White House's "Cloud First" policy, DHS CIO Richard Spires told a House cybersecurity subcommittee on Thursday.

    DHS' plans go beyond White House requirements that each federal agency move three services to the cloud by mid-2012. Several of the cloud services are already in place, including enterprise content delivery, an identity proofing service, and authentication services from the agency's private cloud.

    Included among the new services DHS plans to implement in the cloud are services for collaboration, software development, infrastructure, project management, and business intelligence.

    Email and SharePoint: DHS plans to deliver email and SharePoint from its own private cloud. By the end of this calendar year, 90,000 DHS users will be on the SharePoint service, and by the end of next fiscal year, 100,000 users will be on the email service.

    Development and Test: By moving its disparate dev and test systems into a private cloud, DHS anticipates being able to greatly accelerate the delivery of new software and services. Server provisioning time should decrease from six months to a day, and the service will include on-demand testing and app management.

    Infrastructure as a Service: By the end of this calendar year, DHS plans to implement an IaaS service that includes virtual production systems, network, and storage and that's designed with industry standards in mind.

    WorkPlace as a Service: Within the next 24 months, DHS plans to craft a service to allow virtual desktop and remote access for employees across the country. The hope is that this will enable DHS to decrease its long-term spending on desktops and laptops.

    Project Server: Within the next 30 days, DHS will launch Microsoft's Project Server as a service. The goal is to standardize project management efforts across the agency and to make it easier to publish and share project schedules across the agency's different groups and bureaus.

    Case and Relationship Management as a Service: DHS's plans to offer case and relationship management as a service dovetail well with its plans to offer Project Server as a service. Both efforts are targeted at standardizing business processes across the agency. In addition, the case and relationship management offering will leverage enterprise license agreements to cut licensing costs.

    Business Intelligence: There's already a pilot business intelligence project in place at DHS, which looks to fill a void and provide the agency with a more complete view of its programs and expenditures. By fiscal 2013, the agency plans to turn this effort into a full-fledged private cloud service.

    Web Content Management as a Service: DHS' experience hosting RestoretheGulf.gov in the public cloud last year led this year to the award of a public cloud hosting contract. Within six months, DHS will begin piloting public hosting with websites from Immigration and Customs Enforcement, Citizenship and Immigration Services, and the Federal Emergency Management Agency.

  • Optimizing Branch Office Servers by Using Windows Server® 2008 R2

    Microsoft IT Showcase is pleased to announce the publication of Optimizing Branch Office Servers by Using Windows Server® 2008 R2 with Hyper-V, which discusses how Microsoft IT uses Windows Server 2008 R2 with Hyper-V to provide a secure, future-proof platform on which to provide key services to users.

    Microsoft Information Technology (IT) used Windows Server 2008 R2 with Hyper-V to virtualize all of the enterprise's branch-office servers. This enabled Microsoft IT to extend virtualization's benefits to a variety of site types, from small offices to large, complex branches, thereby improving services, reducing costs, and expanding infrastructure-consolidation efforts.
    Article | Technical Case Study

  • Get the most out of your Private Cloud with guidance from the Solution Accelerators Team

    NEW! Service Management for the Private Cloud!

    This white paper will help you take advantage of service management principles to maximize the benefits of the private cloud. Service level management is more important than ever because of the private cloud's emphasis on self-service, and the interdependency of its components. Service catalogs also play a big role in a cloud environment because of the importance of letting users know what is available, at what costs, and at what service levels. Apply service management to get several benefits out of the private cloud, such as elasticity, scalability, automation, and reduced time to market.

    Assess your client environment for Office 365 readiness

    If you are considering a move to the cloud with Microsoft's award-winning business productivity solutions, MAP 6.0 can help make your planning process easier and faster. MAP 6.0 includes an Office 365 client assessment, which evaluates the compatibility of Office suites deployed in your environment with Office 365 via a hardware and software readiness assessment. This assessment helps you quickly determine which client computers in your environment are ready to use Office 365. The tool obtains machine-level detail about why a given computer is not capable of using Office 365, and identifies whether the Office suites currently being used in your environment are compatible with Office 365.

  • Windows Azure beats Amazon EC2, Google App Engine in cloud speed test

    http://arstechnica.com/business/news/2011/10/windows-azure-faster-than-amazon-ec2-and-google-app-engine-in-yearlong-cloud-speed-test.ars

    Microsoft’s Windows Azure has beaten all competitors in a year’s worth of cloud speed tests, coming out ahead of Amazon EC2, Google App Engine, Rackspace and a dozen others.

    The independent tests were conducted by application performance management vendor Compuware using its own testing tool, CloudSleuth, which debuted last year. Anyone can get results from the past 30 days for free by going to the CloudSleuth website, but this is the first time Compuware has released results for an entire 12-month period.

    Compuware uses 30 testing nodes spread around the globe to gauge performance of the cloud services once every 15 minutes. The company performed 515,000 tests overall for a year’s worth of data covering August 2010 to July 2011, which Compuware released today. Each test requires the loading of a simulated retail shopping site consisting of two pages, one page containing 40 item descriptions and small JPEG images, and the second page containing a single, larger image of 1.75MB.

    The Windows Azure data center in Chicago completed the test in an average time of 6,072 milliseconds (a little over six seconds), compared to 6.45 seconds for second-place Google App Engine. Both improved steadily throughout the year, with Azure dipping to 5.52 seconds in July and Google to 5.97 seconds. Also scoring below 7 seconds for the whole year were the Virginia locations of OpSource and GoGrid along with BlueLock in Indiana. Rackspace in Texas posted an average time of 7.19 seconds, while Amazon EC2 in Virginia posted a nearly identical 7.20. Amazon’s California location scored 8.11 seconds on average.
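
    Normalized to seconds and ranked, the averages quoted above look like this (the figures are as reported by Compuware for August 2010 through July 2011; this is just a re-presentation of those numbers, not new measurements):

```python
# The average page-load times quoted above, normalized to seconds
# and ranked from fastest to slowest. Figures are as reported by
# Compuware for Aug 2010 - Jul 2011.

avg_seconds = {
    "Windows Azure (Chicago)": 6072 / 1000,  # reported in milliseconds
    "Google App Engine": 6.45,
    "Rackspace (Texas)": 7.19,
    "Amazon EC2 (Virginia)": 7.20,
    "Amazon EC2 (California)": 8.11,
}

for provider, secs in sorted(avg_seconds.items(), key=lambda kv: kv[1]):
    print(f"{provider}: {secs:.2f} s")
# Windows Azure (Chicago) comes out fastest at about 6.07 s.
```

    Against Compuware's own rule of thumb that a 1.5- to 2-second spread is substantial, the roughly one-second gap between Azure and the Amazon EC2 Virginia location is meaningful but not dramatic.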

    Generally, Compuware says a 1.5- to 2-second spread between cloud providers represents a substantial performance difference.

    Compuware admits the test is a limited one.

    “The choice of a web site as the initial target application should be seen as a first step to understanding the availability, responsiveness and consistency of cloud service providers,” the company explains. “While admittedly monochromatic (especially in light of the richness of services provided by cloud providers), the choice reflects the observation that the majority of modern applications rely on the Internet protocols as their transport mechanism. It enables us to create a relatively small and simple application that still gives us great insight into the core performance of cloud service providers. Just as importantly, it can be easily implemented on both PaaS and IaaS cloud providers.”

    Amazon EC2 has proven its worth in many real-world scenarios, including the building of a 30,000-core HPC cluster on EC2 and a separate EC2-based cluster that ranked in the world's Top 500 fastest supercomputers.

    But within the limits of the Compuware test, Azure has improved in relation to its competitors. In July 2010, Network World conducted a test using the CloudSleuth tool and found that both Google App Engine and Amazon EC2’s Virginia location were faster than Windows Azure over the course of a month. (Google App Engine, perhaps because of its distributed nature, is tested by CloudSleuth as a whole rather than from specific locations.)

    Although Compuware tries to make the tests expansive by spreading nodes throughout the world, the results are still highly affected by location. For example, both Azure and Amazon posted poor scores in their Singapore data centers (16.10 seconds for Azure and 20.96 seconds for Amazon, the worst time in the survey) but the discrepancies between North America and Asia are due in large part to limitations in the Compuware testing network. “Within Asia, the performance is generally abysmal by North American standards,” says CloudSleuth product manager Lloyd Bloom. But the measurements are skewed because “most of our measurement points are not in Asia.”

    The times discussed so far are worldwide averages; North American-only times are generally about twice as fast. Azure continued to be the fastest in the past 30 days, both in North America only and in the worldwide average, according to results pulled from the CloudSleuth website this week. Azure also did well in availability, with its Chicago facility hitting 99.93 percent uptime over the past month, significantly better than the 97.69 percent score posted by Google App Engine and among the highest overall. Rackspace in Texas achieved 99.96 percent uptime, while Amazon EC2 in California scored 99.75 percent and EC2 in Virginia was up 99.39 percent of the time.
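    Uptime percentages are easier to compare once converted into actual downtime. A quick back-of-the-envelope conversion (assuming a 30-day month; this calculation is ours, not from the article) shows how large the gap between 99.93 and 97.69 percent really is:

    ```python
    # Convert a monthly uptime percentage into minutes of downtime,
    # assuming a 30-day month.
    MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes

    def downtime_minutes(uptime_pct: float) -> float:
        return MINUTES_PER_MONTH * (1 - uptime_pct / 100)

    # Figures quoted above for the past month.
    for provider, pct in [("Azure (Chicago)", 99.93),
                          ("Rackspace (Texas)", 99.96),
                          ("Google App Engine", 97.69)]:
        print(f"{provider}: {downtime_minutes(pct):.0f} min/month down")
    ```

    Azure's 99.93 percent works out to roughly half an hour of downtime in the month, while Google App Engine's 97.69 percent is close to 17 hours.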

    While Compuware’s results may be a good starting point for customers trying to decide between various cloud services, they’re not perfect. For example, Salesforce’s Force.com cloud isn’t tested, even though it may be the most widely used platform-as-a-service cloud.

    Compuware’s testing isn’t sponsored by any vendor, but several of the vendors in the testing are either Compuware partners or customers, including SoftLayer and OpSource. OpSource chief marketing officer Keao Caindec called the CloudSleuth test easy to use and among the most extensive in terms of the number of clouds tested, but the website loading test doesn’t include everything a potential customer might care about. “There are several ways of looking at load testing,” he noted. Another test is to simulate credit card transactions, which requires several steps. Compuware does provide that sort of testing, Caindec said, but you have to pay for it. Compuware is only willing to give away so much for free, but CloudSleuth does provide some interesting information.

    “It’s been difficult in the past to actually come up with an objective way for our clients to compare us to other clouds,” Caindec said.

    Compuware designed the webpage loading test to be fairly generic. “We wanted to make sure we didn’t play to the strengths of any one specific provider and their ability to accelerate pages,” said Compuware product marketing manager Ryan Bateman.

    While Bateman noted that “some providers are going to fare much better because they’re geographically closer to the locations in which we’re testing from,” CloudSleuth provides enough granular data to help customers figure out the differences. For example, on CloudSleuth.net, you can click the measurement location in, say, New Jersey or Argentina and see a ranking of all the cloud providers from that location. Depending on the application or services you want to host with a cloud provider, the location-specific data may be more important than the global average.

  • Open Beta now available for download: Service Management for the Private Cloud

    The Solution Accelerators Microsoft Operations Framework team is working on a new white paper, Service Management for the Private Cloud. Based on your participation in the IPD Beta Program, we believe you will find this white paper useful in your journey to the cloud. We hope you'll take the time to preview and provide feedback on our new beta release.

    Get the download

    To download the Beta version of this Solution Accelerator, click here.

    Tell us what you think

    The Beta review period runs through October 7, 2011.

    Download the beta guide and provide us with your feedback, especially in the areas of its usefulness, usability, and impact. Send an email with your input to MOF@microsoft.com by Friday, October 7.

  • Private Cloud Case Study - National Bank of Kuwait Eases Data Center Constraints by Building a Private Cloud

    National Bank of Kuwait wanted to move from mainframe technology to a model based on Windows, but power limitations prevented the use of the hundreds of servers it needed. To achieve its goal within the constraints of available power, the bank implemented a virtualized solution based on Windows Server 2008 R2 Hyper-V technology, consolidated 75 percent of its infrastructure, and developed a foundation for a private cloud.