One of the questions I get most often about Hyper-V over SMB is how the network should be configured. Networking is key to several aspects of the scenario, including performance, availability and scalability.
The main challenge is to provide a fault-tolerant and high-performance network for the two clusters typically involved: the Hyper-V cluster (also referred to as the Compute Cluster) and the Scale-out File Server Cluster (also referred to as the Storage Cluster).
Not too long ago, the typical configuration for virtualization deployments would call for up to six distinct networks for these two clusters, such as dedicated networks for management, cluster, live migration, virtual machine and storage traffic.
These days, it’s common to consolidate these different types of traffic, with the proper fault tolerance and Quality of Service (QoS) guarantees.
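To make that consolidation concrete, here is a minimal sketch using the Hyper-V QoS model, assuming a NIC team named Team1 already exists on the node; all names and weight values below are illustrative placeholders, not prescriptions from this post:

```powershell
# Minimal sketch: one converged Hyper-V virtual switch with weight-based QoS.
# Assumes a NIC team named "Team1" already exists; all names and weights
# below are illustrative placeholders.
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "Team1" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# Host (parent partition) virtual NICs for each consolidated traffic type
Add-VMNetworkAdapter -ManagementOS -Name "Management"    -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster"       -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"

# Relative minimum-bandwidth guarantees (weights, not hard caps)
Set-VMNetworkAdapter -ManagementOS -Name "Management"    -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "Cluster"       -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
```

The weights are relative shares of the switch bandwidth, so a traffic type can burst above its guarantee when the link is idle.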
There are certainly many different ways to configure the network for your Hyper-V over SMB deployment, but this blog post will focus on two of them.
Both configurations presented here work with Windows Server 2012 and Windows Server 2012 R2, the two versions of Windows Server that support the Hyper-V over SMB scenario.
Configuration 1 – Basic fault-tolerant Hyper-V over SMB configuration with two non-RDMA ports
The solution below uses two network ports for each node of both the Compute Cluster and the Storage Cluster. NIC teaming is the main technology used for fault tolerance and load balancing.
[Diagram: Configuration 1]
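A minimal sketch of the teaming piece, assuming two physical ports named NIC1 and NIC2 on each node (the adapter and team names, and the load-balancing algorithm, are placeholders for illustration):

```powershell
# Sketch: two-port, switch-independent NIC team on a node. The adapter and
# team names are placeholders; pick the load-balancing algorithm that fits
# your workload (the Dynamic algorithm requires Windows Server 2012 R2).
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Confirm the team and its members are up
Get-NetLbfoTeam -Name "Team1"
Get-NetLbfoTeamMember -Team "Team1"
```

Switch-independent teaming has the advantage of working with any pair of switches, since the switches need no knowledge of the team.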
Configuration 2 – High-performance fault-tolerant Hyper-V over SMB configuration with two RDMA ports and two non-RDMA ports
The solution below requires four network ports for each node of both the Compute Cluster and the Storage Cluster, two of them RDMA-capable. NIC teaming is the main technology used for fault tolerance and load balancing on the two non-RDMA ports, while SMB Multichannel provides those capabilities for the two RDMA ports.
[Diagram: Configuration 2]
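To sanity-check the RDMA side on a node, something like the sketch below can help; the adapter names are placeholders. SMB Multichannel itself needs no special configuration; it discovers and uses both RDMA ports automatically.

```powershell
# Sketch: verify the RDMA ports and SMB Multichannel on a compute node.
# Adapter names are placeholders.
Get-NetAdapterRdma                            # list RDMA capability per NIC
Enable-NetAdapterRdma -Name "RDMA1","RDMA2"   # enable RDMA if disabled

# With an SMB connection open to the file server, inspect the channels:
Get-SmbMultichannelConnection                 # per-NIC SMB connections in use
Get-SmbClientNetworkInterface                 # RSS/RDMA capability as SMB sees it
```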
I hope this blog post helps with the network planning for your Private Cloud deployment. Feel free to ask questions via the comments below.
Your timing is perfect. I have four new servers arriving any day now that I'm going to use for this exact scenario. Getting the networking right and being able to leverage RDMA has been my main concern. You've had some very useful posts, but this one cuts right to the heart of the matter. Thanks.
Great post - nice to see in one spot the two primary network config options. A question on config 2, which shows two additional switches: is it not possible to connect the RDMA-enabled ports into switch 1 and 2?
There are reasons to keep a separate set of switches for your RDMA traffic. For example, RoCE-based RDMA typically requires switches configured for Data Center Bridging (DCB) with Priority Flow Control (PFC), and isolating the storage traffic on its own switches keeps the SMB traffic from competing with general LAN traffic.
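As a rough sketch of that switch-side dependency for the RoCE case (the Windows feature name is real, but the adapter names and the choice of priority 3 are just common examples, not from this post):

```powershell
# Sketch: DCB/PFC setup commonly used for RoCE-based SMB Direct.
# A matching PFC configuration must also exist on the physical switches.
Install-WindowsFeature Data-Center-Bridging

# Tag SMB Direct traffic (port 445) with 802.1p priority 3
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Turn on Priority Flow Control for that priority and apply QoS to the RDMA ports
Enable-NetQosFlowControl -Priority 3
Enable-NetAdapterQos -Name "RDMA1","RDMA2"
```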
Having said all that, there are situations where a single set of switches will do for everything.