Cloud Insights from Brad Anderson, Corporate Vice President, Enterprise Client & Mobility
Back in mid-April, I discussed the importance of data center high availability, and how the cost of data storage is minimal compared to the cost of not being able to access that data. There are a lot of options for building a highly available system with disaster recovery and data backup protocols – but many of the current in-market options are expensive, labor-intensive, and ineffective.
Typical disaster recovery services, for example, are surprisingly complex: they require an array of SAN replicators with symmetric hardware on both sides, and typical recovery times from a secondary backup are far too long for an enterprise organization. (For more on this, read up on Recovery Time Objectives and Recovery Point Objectives.)
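To make those two objectives concrete, here is a minimal sketch in Python using hypothetical timestamps (the dates and durations are illustrative, not from the post). The RPO question is "how much data since the last backup would be lost?"; the RTO question is "how long until service is restored?":

```python
from datetime import datetime, timedelta

# Hypothetical incident timeline -- all values are illustrative.
last_backup      = datetime(2013, 6, 1, 2, 0)    # nightly backup completed at 02:00
failure_time     = datetime(2013, 6, 1, 14, 30)  # primary site goes down at 14:30
service_restored = datetime(2013, 6, 1, 20, 30)  # restore finishes at 20:30

# Compare against RPO: everything written since the last backup is lost.
actual_data_loss = failure_time - last_backup    # 12 hours 30 minutes

# Compare against RTO: how long the business was down.
actual_downtime = service_restored - failure_time  # 6 hours

print(f"Data loss window (measure against your RPO): {actual_data_loss}")
print(f"Downtime (measure against your RTO): {actual_downtime}")
```

A nightly-tape shop in this scenario loses half a day of data and spends an afternoon offline – which is exactly the gap that continuous, cloud-based replication is meant to close.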
More and more customers are solving this by moving DR and backup services to the cloud.
The reasons for doing this are simple: enterprises of any size can now take full advantage of the added functionality and capacity of the Microsoft cloud, while keeping the move easy and cost-efficient. This remains true no matter where the secondary site is located – whether it is within your own datacenter, with a service provider, or hosted in Windows Azure, it is always up and always available.
The cloud and its accompanying virtualization solutions offer a huge upgrade for the countless companies that are still using tape, offsite backups, or even warm standby sites. With a cloud-based model, the storage costs are dramatically reduced and – better yet – Azure Storage provides geo-replication, which creates a replica of your data in a different Azure datacenter. This helps protect against local disasters and ensures your data remains accessible.
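To illustrate the idea behind geo-replication, here is a toy model in Python: writes land in a primary region and are copied asynchronously to a secondary region, and reads fall back to the secondary if the primary is unavailable. The class, region names, and behavior are all hypothetical simplifications – in the real Azure Storage service, replication is handled by the platform, not by application code:

```python
class GeoReplicatedStore:
    """Toy model of geo-replicated storage across two regions."""

    def __init__(self, primary_region, secondary_region):
        self.regions = {primary_region: {}, secondary_region: {}}
        self.primary = primary_region
        self.secondary = secondary_region
        self.primary_available = True
        self._pending = []  # writes not yet copied to the secondary

    def put(self, key, value):
        # Writes are acknowledged by the primary region first.
        self.regions[self.primary][key] = value
        self._pending.append((key, value))

    def replicate(self):
        # In the real service this copying happens continuously in the background.
        for key, value in self._pending:
            self.regions[self.secondary][key] = value
        self._pending.clear()

    def get(self, key):
        # If the primary region is down, serve the replica instead.
        region = self.primary if self.primary_available else self.secondary
        return self.regions[region].get(key)


store = GeoReplicatedStore("east-us", "west-us")
store.put("order-1", "paid")
store.replicate()

# Simulate a local disaster taking out the primary datacenter:
store.primary_available = False
print(store.get("order-1"))  # data still readable from the replica
```

The point of the sketch is the last three lines: because a copy already exists in a second datacenter, losing the primary does not mean losing access to the data.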
I know that a certain bias is implicit given my role here at Microsoft, so I encourage you to weigh your options rather than just take my word for it that the Microsoft cloud offers a better option. Regardless of your organization’s IT infrastructure, consider how Windows Server 2012 and Windows Azure have continuous data availability built into every aspect of their design.
These capabilities ensure that, even in the face of a disaster, you can work with a consistent management experience across your clouds. In addition to this consistent experience, Windows Azure interoperates seamlessly with Windows Server 2012 to act as an extension of the customer’s datacenter (and vice versa) by supporting reliability across VMs, seamless networking, identity federation, and compute elasticity. Each of these features, in turn, supports a long list of scenarios: failover/failback, item-level recovery, workload migration, bi-directional VM mobility, patch validation, and more.
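The failover/failback scenario mentioned above can be sketched as a simple state transition. This is a deliberately minimal model with hypothetical site names – real orchestration (replication health, test failovers, reverse replication) is handled by the platform, not by code like this:

```python
class ReplicatedWorkload:
    """Toy model of where a replicated workload is currently running."""

    def __init__(self):
        self.active_site = "primary"  # the workload normally runs at the primary site

    def failover(self):
        # Primary site is down: bring the replica online at the secondary site.
        self.active_site = "secondary"

    def failback(self):
        # Primary is healthy again: replicate changes back and move the workload home.
        self.active_site = "primary"


workload = ReplicatedWorkload()
workload.failover()   # disaster strikes the primary site
print(workload.active_site)  # the replica now serves users

workload.failback()   # primary recovers
print(workload.active_site)  # the workload returns home
```

What makes the cloud model attractive is that both transitions are orchestrated operations measured in minutes, rather than a manual rebuild measured in days.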
I don’t mean for all of this to sound like a pitch from the marketing department, but I do want to paint a picture of a comprehensive in-market disaster recovery solution. I’m extremely proud of what we have been able to deliver for our customers, and I’m confident saying that no other company – or combination of companies – can offer a solution like this. Looking ahead, we are constantly working on more streamlined ways to integrate these solutions – and that integration is a top priority as we continue to refine and improve these tools.
These products fundamentally change the way our customers and partners think about (and react to) disaster recovery. They also change the relative magnitude of such an event: with the Microsoft cloud, a massive system failure may no longer be a matter of life and death for the business – it may just be a matter of minutes.