Kevin Remde's IT Pro Weblog
Over the past several weeks, my teammates have all contributed to a very valuable series of blog articles entitled “Disaster Recovery Planning for IT Pros”. They’ve covered topics such as how to get started with planning, how server virtualization applies to disaster recovery, and how to test your recovery plans. And they’ve discussed technologies that can help – the tools in your DR tool belt – such as Hyper-V Replica, Windows Azure, and the newer Windows Azure Hyper-V Recovery Manager.
For the full list of excellent articles, CLICK HERE.
“What is an Offline Backup?”
Before I dive in here, I want to be clear about what I’m covering. For the purpose of this discussion, I’m not talking about tapes or off-site storage of backed-up data. That’s something more commonly called an archive. Regular storage and archival for recovery of past history is an important (and big) topic in and of itself; perhaps the topic of another blog series for another day. For this article, however, I’m talking about having a copy of some important digital asset that was saved in a way that can safely and fully be recovered as a complete unit, in case the original location is unable to house that asset. (Yeah.. a disaster.) That digital asset could be a server OS installation, a directory, a database, a virtual machine, a desktop image, file storage, an application; really whatever you consider valuable and worth the effort (and cost) to have protected in a way that can be quickly restored if the worst should happen.
“Do I really need an offline copy these days?”
That’s a fair question. With all of the excellent (and many now built-in and included) technologies in modern operating systems such as Windows Server 2012 R2, it could be argued that you don’t really need to create backups of some items. A virtual machine will start running on another cluster node if its hosting node fails, and the storage supporting that machine could sit on an always-available file server cluster (Scale-Out File Server), with redundant paths to the storage, backed by redundant disk arrays where a failed disk or controller can be replaced while the data continues to be available. (And I haven’t even touched on application availability within a virtual machine or the benefits of virtualization guest-clustering.)
But even with all of this great technology, not all data or files or applications are equally important, and not all are worth the same amount of investment to ensure their availability and – important to our DR topic – their recovery in case of a really bad thing (disaster) happening.
The case for the offline backup will be determined by these factors:
“How important is your data?”
As part of the planning process (which Jennelle introduced to you early in our series), you’ll take an inventory of all of your digital assets, and then make a priority list of those items. The priority should run from MOST critical (i.e. my business can’t function or survive without it) to LEAST critical (no big deal / can rebuild / etc.) assets. Now, going on the assumption that at some point your datacenter is turned into a “steaming pile”, you’ll draw a line through your list. Items above the line are critical to your business overcoming the disaster. Items below the line.. not worth the investment in time or effort. (Note: that line will shift up or down as you work through this, as you get into actually figuring out the costs associated with your plan, and – importantly – as you re-evaluate your disaster preparedness on a regular schedule.)
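The inventory-prioritize-and-draw-a-line exercise above can be sketched in code. This is only an illustration; the asset names, criticality scores, recovery costs, and budget below are entirely hypothetical:

```python
# Hypothetical inventory of digital assets, each with a criticality score
# (10 = business can't survive without it) and an estimated cost to protect.
assets = [
    {"name": "Order database", "criticality": 10, "recovery_cost": 5000},
    {"name": "File server",    "criticality": 7,  "recovery_cost": 1200},
    {"name": "Intranet wiki",  "criticality": 4,  "recovery_cost": 800},
    {"name": "Dev/test VMs",   "criticality": 2,  "recovery_cost": 1500},
]

def draw_the_line(assets, budget):
    """Rank assets MOST to LEAST critical, then protect them in order
    until the budget runs out -- that cutoff is 'the line'."""
    ranked = sorted(assets, key=lambda a: a["criticality"], reverse=True)
    protected = []
    remaining = budget
    for asset in ranked:
        if asset["recovery_cost"] > remaining:
            break  # everything from here down falls below the line
        protected.append(asset["name"])
        remaining -= asset["recovery_cost"]
    return protected

print(draw_the_line(assets, budget=7000))
# With a $7000 budget, the dev/test VMs fall below the line.
```

As the article notes, the line moves: re-running the exercise with a new budget or updated criticality scores shifts which assets land above it.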
For each of your inventoried and prioritized digital assets you’re also going to be defining a couple of objectives – the Recovery Point Objective (RPO) and the Recovery Time Objective (RTO).
The RPO is the answer to the question: “How much data can I afford to lose if I need to recover from an offline copy of that data?” In its simplest terms, it determines how frequently I make a new offline copy of that asset. An example in Hyper-V Replica would be the setting that determines how frequently a new set of changes is replicated to the virtual machine’s replica copy. If I’m replicating changes every 5 minutes, then at most I could lose up to 5 minutes of changes should the worst happen, so my RPO is 5 minutes. Is that good enough? Maybe. It depends on what that virtual machine is actually doing for you.
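The 5-minute replication example can be expressed as a quick sanity check. This is just a sketch of the reasoning, not any real Hyper-V API; the function names are made up:

```python
from datetime import timedelta

def worst_case_data_loss(replication_interval: timedelta) -> timedelta:
    """With interval-based replication (as in Hyper-V Replica), the worst
    case is a failure the instant before the next replication cycle, so
    up to one full interval of changes can be lost."""
    return replication_interval

def meets_rpo(replication_interval: timedelta, rpo: timedelta) -> bool:
    """Does this replication frequency satisfy the stated RPO?"""
    return worst_case_data_loss(replication_interval) <= rpo

# Replicating every 5 minutes meets a 5-minute RPO...
print(meets_rpo(timedelta(minutes=5), rpo=timedelta(minutes=5)))  # True
# ...but not a 1-minute RPO.
print(meets_rpo(timedelta(minutes=5), rpo=timedelta(minutes=1)))  # False
```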
The RTO describes how long you’re willing to wait to bring that digital asset back online. If I’m still doing all of my backups to tape, and then shipping those tapes offsite, it’s going to take a lot longer to recover at a new location than it would if I, say, took advantage of another site and/or Windows Azure to host my stored backups. Can I afford to wait a day or two? An hour? A few minutes? How critical that asset is to your business continuity will help you set a desired RTO for that item.
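The tape-versus-cloud comparison above amounts to adding up the steps in each recovery plan and checking the total against your target. The step names and durations here are invented for illustration:

```python
def estimated_rto_hours(steps: dict) -> float:
    """Estimate total recovery time by summing the duration (in hours)
    of each step needed to bring the asset back online."""
    return sum(steps.values())

# Hypothetical recovery plans: tapes shipped off-site vs. cloud-hosted backups.
tape_plan = {"retrieve tapes": 24.0, "restore data": 6.0, "verify": 2.0}
cloud_plan = {"provision VM": 0.5, "restore from cloud storage": 3.0, "verify": 2.0}

rto_target_hours = 8.0
for name, plan in [("tape", tape_plan), ("cloud", cloud_plan)]:
    total = estimated_rto_hours(plan)
    verdict = "meets" if total <= rto_target_hours else "misses"
    print(f"{name}: {total:.1f}h -> {verdict} the {rto_target_hours:.0f}h RTO")
```

The point of the exercise isn’t the exact numbers; it’s that each plan’s total has to be measured (ideally in a real recovery test) against the RTO you set for that asset.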
“How much are you willing to spend?”
Again, there are expensive options and cheaper ones for addressing the RPO and RTO, and you’ll ultimately base how much you’re willing to invest on the relative importance of each digital asset to bringing your company back quickly (or reasonably quickly) from disaster.
“And once I’ve implemented everything, I’m done?”
Of course not. You’ll regularly test your recovery. And less frequently – but still critically – you’ll occasionally re-evaluate your priority list and the methods for meeting your objectives. Of all people, IT Pros know how quickly technology evolves. What seemed like a good, solid plan and a decent implementation of tools last year may not fit as well today now that newer/better/faster/cheaper options are available. And that’s not even considering the shifting nature of your own environment, the servers, the applications, the growth of data.. all need to be re-considered on a regular basis.
There is definitely a case for offline backup. What and how you do that backup will be defined by you, based on priority, and adjusted by cost. And making those decisions and implementing your plan isn’t the end of the process. You must revisit, re-inventory, re-prioritize, adjust and test your plans on a regular schedule.
This is post part of a 15 part series on Disaster Recovery and Business Continuity planning by the US based Microsoft IT Evangelists. For the full list of articles in this series see the intro post located here: http://mythoughtsonit.com/2014/02/intro-to-series-disaster-recovery-planning-for-i-t-pros/
Yes, in the U.S. we’re coming to a town near you with some free training and hands-on experiences again – this time covering Windows Azure and System Center 2012 R2 to support hybrid cloud scenarios.
Here’s the description of the event from the registration pages:
Build Hybrid Cloud Solutions with Windows Azure and System Center 2012 R2

You CAN have the best of both worlds! With Windows Azure and System Center 2012 R2, IT Pros can easily extend an on-premises network to embrace the power and scale of the cloud – securely and seamlessly. These Hybrid Cloud scenarios present real solutions that you can implement today to solve pressing IT issues such as:

- Managing more data without more hardware
- Protecting Data with Off-site Backups
- Business Continuance and Disaster Recovery
- Cost-effective, on-demand access to Dev/Test Environments
- Internet-scale Web Sites… And MORE!

Join us at this FREE full-day hands-on event to experience the power of Hybrid Cloud. Our field-experienced Technical Evangelists will guide you through the process of jumpstarting your knowledge of Windows Azure Storage, Virtual Machines and Virtual Networking for key IT Pro scenarios. Complete all of the hands-on labs and you’ll walk away with a fully functional Windows Server 2012 R2 or Linux cloud-based test lab running in Windows Azure!
“Sounds great, Kevin! Where are you going to be, and where can I sign up?”
Here is the list of cities in the central part of the U.S.:
** I’ll personally be facilitating the events in Minneapolis (Edina) and Saint Louis (Creve Coeur).
Don’t wait to register. Seating is limited and these tend to fill up quickly. For the full list of events throughout the U.S., go here: http://aka.ms/AzureITCamps
See you there!
IT Professionals: Have you ever been deep in the guts of a gnarly infrastructure deployment, automation, configuration, trouble-shooting or similar task, and thought to yourself something like this:
“Why didn’t that darn product team at Microsoft make this tool work better, more like what I need it to do?”
If you see yourself in this story, we have an opportunity for you!
The Windows Server & System Center (WSSC) design and development team is looking for IT Pros with knowledge & experience in all aspects of infrastructure and services management. We need to know how to make the Microsoft technologies supporting these scenarios work better for you.
IT Pros like you with this specialized knowledge and these skills are hard to find, so we’ll make it worth your while. Here’s what you get from participating in the panel:
1. Opportunities to influence WSSC design and development in areas such as
2. A thank you gift! After participating in a study, you’ll have the option of selecting from a list of Microsoft software, hardware, games, and more. (And I’m told that since IT Pros are the hardest folks to find, you’ll get the best gifts!)
If you’d like to be considered for the IT Pro User Panel, please complete this brief survey.
If you want to know more about Microsoft User Research overall, see the Microsoft User Research page.
Note: Microsoft full or part-time employees, vendors, or contingent employees are not eligible. Sorry.
At our IT Camp in Minneapolis a few weeks ago, Kris asked me a valid question.
“How does a Hyper-V Server (not the full Windows install) do clustering if you can’t install the cluster role?”
Kris’s question is a good one, because we had just been discussing how limited Hyper-V Server is in terms of what it can do, while at the same time supporting all of the same virtualization features and scale that a full Windows Server 2012 R2 machine with the Hyper-V role installed does; including the ability to be joined to a Windows Failover Cluster.
In fact, during the discussion of what roles are supported on Hyper-V Server, I showed this screenshot:
So if you thought Windows Failover Clustering was a role, you’d be scratching your head at this one.
“Ah.. so what’s the catch?”
No catch. Windows Failover Clustering is not a role. It’s a feature!
It’s actually in the list of features rather than roles. It’s not enabled by default, and the Create Cluster wizard enables it when needed.
PS – In case you missed it, we did a 6 week series of blog articles comparing VMware to Microsoft virtualization. Check it out, here: http://aka.ms/VMWorMSFT.