Any good disaster recovery project requires replicating the most critical workloads to somewhere other than the primary site in which they normally operate.  This can be done in several ways: whether the solution is delivered via storage, application, operating system, or hypervisor replication, IT shops have been performing replication for many years.  Let's talk through each style of replication and the challenges that go along with it:

Storage Replication – The storage array (SAN or NAS) is licensed to replicate the data and/or virtual machine configuration files from the primary site to another storage array (usually, but not always, of like make and model, for instance NetApp, EMC, or FalconStor).  This strategy is fairly common in larger IT shops and relies on the storage vendor's devices to do the work of maintaining asynchronous or synchronous copies at the secondary or tertiary sites.

  • The Good – The replication workload is performed well away from the operating system, application, or hypervisor, allowing those layers to maintain performance.  Storage array replication is normally done at the block level, and snapshot features on the arrays are generally intelligent, replicating only the blocks that have changed; for a SAN or NAS, blocks are their business.
  • The Bad – Often requires expensive licensing on the storage platforms at the primary, secondary, and tertiary sites.  Requires like make and model of storage devices in each facility.  Requires a storage administrator and ongoing monitoring of disk space for replication workloads across multiple devices and locations.  Storage manufacturers sometimes fall slightly behind the most recent hypervisors, operating systems, and applications in supporting new features, which may hold IT shops back from upgrading any or all of their infrastructure.  Third-party software that attaches to storage devices to perform array replication can also be introduced at this juncture, but it can muddy the admin waters a bit with added complexity.

Application Replication – Some software applications include a feature to replicate their settings and data to another server running the same software.  This was perhaps the first form of replication many IT administrators tackled.  For instance, Active Directory replicates across wide area networks and provides resiliency when one domain controller goes down.

  • The Good – This is often the easiest form of replication and often the least expensive to implement.  Some software packages perform the task without any intervention from the administrator.  Synchronization is sometimes nearly immediate across the infrastructure.
  • The Bad – Only certain software packages support this model of replication.  Some of the more advanced packages, database servers for instance, may require more expertise to manage and maintain.  The most critical drawback is that each application replicates with its own method, so each requires its own knowledge and time to monitor and maintain.  Multiple software types demand broader IT staff expertise, and in the event of a disaster it can take significant time to identify and correct issues in each software installation.
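As a quick illustration of what monitoring this style of replication looks like, Active Directory ships with its own tooling for checking replication health.  A minimal sketch using the built-in repadmin utility and the ActiveDirectory PowerShell module (the domain controller name is a placeholder):

```powershell
# Summarize inbound/outbound replication health across all domain controllers
repadmin /replsummary

# Show replication partners and last-sync status for one DC
# ("dc01.contoso.com" is a placeholder name)
repadmin /showrepl dc01.contoso.com

# The same data via the ActiveDirectory module (Windows Server 2012 and later)
Get-ADReplicationPartnerMetadata -Target "dc01.contoso.com" |
    Select-Object Partner, LastReplicationSuccess, LastReplicationResult
```

The point is less the specific commands and more that every replicating application brings its own toolset like this, which is exactly the staffing burden described above.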

Operating System Replication – This method first became popular when multiple applications resided on the same server and an identical copy of the server OS and applications was needed.  It is not that common anymore, since clustering and virtualization have come into play.  This type of replication generally involves third-party software geared toward the solution.

  • The Good – Can be used to maintain physical server replicas when no virtualization or storage array replication is available or applicable.  Third-party software support can generally assist with setup and configuration, and some vendors offer monitoring services.
  • The Bad – With many servers in an environment this method can become an administration nightmare, though it works well in smaller environments or in shops with many administrators.  This type of replication can be difficult to monitor and may at times prevent the IT admin from upgrading to newer service packs if the third-party vendor cannot keep pace with the dynamics of the industry.

Hypervisor Replication – This method is quickly becoming the style of choice in most IT shops.  It provides the ability to replicate virtual machines to secondary or tertiary facilities and to manage the replication from a centralized tool.  This replication style needs no knowledge of the underlying operating system or applications, nor of the storage array in play.

  • The Good – Hypervisor snapshots or checkpoints can be set to quiesce the underlying workloads with no impact on end users.  This method of replication can be scaled up or down from a bandwidth perspective rather easily.  Centralized management of replication jobs and settings, including DR plans or playbooks, can be implemented easily via services like VMware Site Recovery Manager or Azure Hyper-V Recovery Manager.
  • The Bad – Expertise in the virtualization infrastructure is required.  Non-virtualized workloads would require a separate replication style (see the approaches above).  Replicating virtual machine workloads requires hosts, sometimes sitting idle, in the target datacenters, which adds cost to the project.  A complete solution built on a service like Azure Hyper-V Recovery Manager adds further cost, but it allows for a more seamless failover of many workloads in a short amount of time.  Recovery Manager does require a System Center VMM installation at each site being monitored.

Enter Hyper-V Replica to the game.  When Windows Server 2012 hit the market, replication came out of the box.  The ability to replicate workloads across the WAN to free Hyper-V installations allowed for an inexpensive method for guarantying up time for critical workloads.  Additionally, in Windows Server 2012 R2 the addition of tertiary replication was implemented allowing for 2 copies of a given workload to stay in asynchronous harmony in the event of a disaster.   Soon thereafter, Windows Azure Hyper-V Recovery Manager was announced in Preview mode and has since been announced as Generally Available, allowing for management of multiple sites.   By layering a System Center VMM implementation into the infrastructure, administrators can then design playbooks and failover strategies to be carried out by Azure Hyper-V Recovery Manager which resides in the Microsoft cloud at all times and hence becomes much less of a risk in the event of a natural disaster.   To get started, checkout these step-by-step guides for implementing Hyper-V Replica as well as VMM and Hyper-V Recovery Manager:
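To give a feel for the out-of-the-box experience, the basic Hyper-V Replica setup can be scripted with the built-in Hyper-V PowerShell cmdlets.  This is a minimal sketch, not a production runbook; the server names, VM name, and storage path are placeholders, and it assumes Kerberos authentication between domain-joined hosts:

```powershell
# On the replica (target) host: accept inbound replication over Kerberos/HTTP
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -KerberosAuthenticationPort 80 `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation "D:\ReplicaStorage"   # placeholder path

# On the primary host: enable replication for a VM and send the initial copy
# ("SQL01" and "hv-replica.contoso.com" are placeholder names)
Enable-VMReplication -VMName "SQL01" `
    -ReplicaServerName "hv-replica.contoso.com" `
    -ReplicaServerPort 80 `
    -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "SQL01"

# Check replication state and health at any time
Measure-VMReplication -VMName "SQL01"
```

On Windows Server 2012 R2, the tertiary (extended) replica is configured similarly, by running Enable-VMReplication on the replica host and pointing it at the third site.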

Hyper-V Replica Step-by-Step guide: Server 2012 R2 – Lab Guide – Part 5 – Step by Step – Enabling Hyper-V Replica

Hyper-V Replica via PowerShell Step-by-Step guide: Automated Disaster Recovery with Hyper-V Replica and PowerShell

Hyper-V Replication Considerations Parts 1 and 2: Replication with Hyper-V Replica – Part I – Replication Considerations Made Easy Step-By-Step

Hyper-V Capacity Planner Step-by-Step guide: Guided Hands-on Lab- Capacity Planner for Windows Server Hyper-V Replica

Azure Hyper-V Recovery Manager step-by-step guide: Build a Private Cloud Recovery Plan with Windows Azure Hyper-V Recovery Manager