File servers are a key foundational workload in most organizations. In addition to providing shared folders for document collaboration, storage and archiving, they can also serve as a data repository for other key business applications. As such, file servers are a kind of “network utility” for data storage, without which most organizations would find it hard to continue operations.
When planning a disaster recovery strategy, shared file services are often one of the first workloads for which organizations find themselves needing an effective recovery approach. Of course, you may already have a traditional backup plan for recovering file servers, but have you tested it recently? Many organizations find that traditional restore processes for recovering large file servers take too long and do not meet Recovery Time Objective (RTO) expectations. In addition, some data may be very volatile, and performing only traditional daily backups may not meet your Recovery Point Objective (RPO) either.
Traditional backups are still needed and are a good first step in preparing yourself with basic recovery options. But to reduce recovery time and data loss to tolerable limits that meet your RTO and RPO, you'll want to consider augmenting a backup strategy with more agile recovery options. In this article, I'll discuss one such recovery scenario for key file server workloads: implementing a file server replication strategy to a disaster recovery site leveraging Windows Azure and Windows Server 2012 R2. And the good news is that we'll be using capabilities that are native to these products, so you won't need to purchase expensive storage solutions or third-party software.
Lots of organizations have “single-site” networks or may lack a dedicated disaster recovery site. If your organization falls into this category, you should take a look at Windows Azure! As an on-demand cloud platform, Windows Azure can provide a really cost-effective alternative to a traditional DR site, and one that can be “brought online” much more quickly.
In fact, for many of the IT Pros I speak with, Disaster Recovery in the Cloud is a key area of interest even if they already have a dedicated DR site, because it can help reduce the costs associated with disaster recovery.
To build an effective file server replication solution for disaster recovery purposes, we’ll be leveraging the built-in Distributed File System Replication (DFS-R) engine in Windows Server 2012 R2. Actually, DFS-R has been available as a core technology of Windows Server since the days of Windows Server 2003, but nearly every major release of Windows Server since then has brought significant new enhancements to DFS-R for improved scalability, performance and management. In Windows Server 2012 R2, we’ve tested DFS-R for asynchronously replicating up to 70 million files per volume with a combined size of up to 100TB of data!
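To make the moving parts concrete, here's a minimal sketch of how a two-member DFS-R replication group between an on-premises file server and a DR file server running in Windows Azure might be configured, using the DFSR PowerShell module that ships with Windows Server 2012 R2. The server names (FS01, AZFS01), replication group name, folder name, and paths are hypothetical placeholders; substitute values for your own environment.

```powershell
# Hypothetical names: FS01 = on-premises file server, AZFS01 = DR file server VM in Windows Azure.
# Assumes the DFSR PowerShell module (Windows Server 2012 R2) and that both members are
# domain-joined with network connectivity between them.

# Create the replication group and the replicated folder it will contain
New-DfsReplicationGroup -GroupName "FileServerDR"
New-DfsReplicatedFolder -GroupName "FileServerDR" -FolderName "CorpData"

# Add both file servers as members of the replication group
Add-DfsrMember -GroupName "FileServerDR" -ComputerName "FS01","AZFS01"

# Create a two-way replication connection between the members
Add-DfsrConnection -GroupName "FileServerDR" `
    -SourceComputerName "FS01" -DestinationComputerName "AZFS01"

# Point each member at its local copy of the data; FS01 holds the authoritative copy
Set-DfsrMembership -GroupName "FileServerDR" -FolderName "CorpData" `
    -ComputerName "FS01" -ContentPath "E:\CorpData" -PrimaryMember $true -Force
Set-DfsrMembership -GroupName "FileServerDR" -FolderName "CorpData" `
    -ComputerName "AZFS01" -ContentPath "E:\CorpData" -Force
```

This is a sketch rather than a production runbook; in a real deployment you'd also review replication schedules, bandwidth throttling, and staging folder quotas to match the size and churn rate of your data set.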
In fact, we’ll be leveraging a new feature of DFS-R in Windows Server 2012 R2 to speed up the initial replication of a large set of files: DFS-R database cloning.
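As a rough sketch of how database cloning fits in: you export the DFS-R database from the volume on the primary member, preseed both the file data and the exported database to the DR member (for example, with Robocopy), and then import the clone on the DR member so that initial replication can validate the preseeded files instead of re-transferring them. The server names and paths below are hypothetical placeholders.

```powershell
# On the primary member (FS01): export the DFS-R database for the volume
# that hosts the replicated folder
New-Item -Path "E:\DfsrClone" -ItemType Directory
Export-DfsrClone -Volume "E:" -Path "E:\DfsrClone"

# Preseed the data and the exported database to the DR member (AZFS01),
# preserving security descriptors so file hashes match on both sides
Robocopy.exe "E:\CorpData" "\\AZFS01\E$\CorpData" /E /B /COPYALL /R:6 /W:5 /MT:64 /XD DfsrPrivate
Robocopy.exe "E:\DfsrClone" "\\AZFS01\E$\DfsrClone" /B

# On the DR member (AZFS01): import the cloned database so DFS-R can skip
# the expensive over-the-wire initial sync
Import-DfsrClone -Volume "E:" -Path "E:\DfsrClone"
```

For large data sets, preseeding over the wire may still be slow; shipping the preseeded data on physical media and importing the clone afterwards is another option worth weighing against your available bandwidth.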
To leverage DFS Replication, there are a couple of dependencies that we'll need to keep in mind prior to configuring DFS-R:
By implementing these dependencies first, you'll not only be supporting the needs of DFS-R for disaster recovery; you'll also be paving the way for your organization to leverage the cloud for lots of key scenarios that can improve agility and reduce costs.
In our file server disaster recovery scenario, we’ll be stepping through the following tasks:
By the time we’re done, our scenario will resemble the diagram below.
File Server Disaster Recovery Scenario with Windows Azure and DFS-R
To proceed through the tasks in this article, you’ll need an active Windows Azure subscription. If you don’t yet have a Windows Azure subscription, you can activate a free subscription via our Windows Azure trial program.
To begin these tasks, sign in to the Windows Azure Management Portal with the user credentials that you used when activating your Windows Azure subscription.
In addition to this scenario, you may also be interested in the following additional step-by-step cloud scenarios:
See you in the Clouds!
Be sure to check out these additional resources:
Keith Mayer is a Senior Technical Architect at Microsoft, focused on helping ISV partners leverage the Azure cloud platform. Keith has over 20 years of experience as a technical leader of complex IT projects, in diverse roles, such as Network Engineer, IT Manager, Technical Instructor and Consultant. He has consulted and trained thousands of customers and partners worldwide on design of enterprise technology solutions.
Keith is currently certified on several Microsoft technologies, including Azure, Private Cloud, System Center, Hyper-V, Windows, Windows Server, SharePoint, SQL Server and Exchange. He also holds other industry certifications from VMware, IBM, Cisco, Citrix, HP, CheckPoint, CompTIA and Interwoven.
You can contact Keith online at http://aka.ms/AskKeith.