As an IT guy, I firmly believe that engineers understand graphics and charts much better than bullet points and text, so the first thing I will do is paste the following diagram:
At first sight you can see, from left to right, that there are six physical network cards used in this example. You can also see that the two adapters on the left are 1Gb adapters and the other four green adapters are 10Gb adapters. These basic considerations are really important because they will dictate how your Hyper-V cluster nodes will perform.
On top of the six physical network cards you can see that some of them are using RSS and some of them are using dVMQ. Here is where things start to become interesting, because you might wonder why I don't suggest creating one big 4-NIC team with the 10Gb adapters and dismissing or disabling the 1Gb adapters. At the end of the day, 40Gb should be more than enough, right?
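Before deciding on a layout like this, it helps to check what your hardware actually exposes. Here is a minimal PowerShell sketch; the output of course depends on your NICs and drivers:

```powershell
# List the physical NICs with their link speeds (1Gb vs 10Gb)
Get-NetAdapter -Physical |
    Format-Table Name, InterfaceDescription, LinkSpeed, Status

# See which adapters expose RSS and which expose VMQ
Get-NetAdapterRss | Format-Table Name, Enabled
Get-NetAdapterVmq | Format-Table Name, Enabled, NumberOfReceiveQueues
```

Keep in mind that on a given physical adapter you will effectively use one or the other: RSS for host traffic, dVMQ once a vSwitch sits on top.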
Well, as a PFE, I like stability, high availability and robustness in my Hyper-V environments, but I also like to separate things that have different purposes. Using the approach from the picture above will give me the following benefits:
So, as you can see, this setup has a lot of benefits and follows several best practice recommendations. It is not bad at all, and maybe there are other benefits that I've forgotten to mention… but what are the constraints or limitations of this Non-Converged Network Architecture? Here are some of them:
Maybe I didn't give you any new information regarding this configuration, but at least we can see that this architecture is still a good choice for several reasons. If you have the hardware available, you certainly have the knowledge to use this option.
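To make the VM-facing side of this non-converged design a little more concrete, here is a minimal sketch of how it could be scripted. The adapter, team, and switch names ("NIC3", "NIC4", "VMTeam", "VMSwitch") are placeholders for this example, not part of any prescribed naming:

```powershell
# Team two dedicated 10Gb adapters for VM traffic using LACP
# (the physical switch ports must be configured for LACP as well)
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC3", "NIC4" `
    -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic

# Put the external vSwitch on top of the team and keep the
# management OS off it; management stays on the 1Gb adapters
New-VMSwitch -Name "VMSwitch" -NetAdapterName "VMTeam" -AllowManagementOS $false
```

Once the vSwitch exists, the team members stop exposing RSS and start servicing dVMQ queues for the VMs, which is exactly the split the diagram shows.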
See you in my next post, where I will talk about Converged Networks Managed by SCVMM and PowerShell.
The series will contain these posts:
1. Hyper-V 2012 R2 Network Architectures Series (Part 1 of 7) – Introduction
2. Hyper-V 2012 R2 Network Architectures Series (Part 2 of 7) – Non-Converged Networks, the classical but robust approach (This Post)
3. Hyper-V 2012 R2 Network Architectures Series (Part 3 of 7) – Converged Networks Managed by SCVMM and PowerShell
4. Hyper-V 2012 R2 Network Architectures Series (Part 4 of 7) – Converged Networks using Static Backend QoS
5. Hyper-V 2012 R2 Network Architectures Series (Part 5 of 7) – Converged Networks using Dynamic QoS
6. Hyper-V 2012 R2 Network Architectures Series (Part 6 of 7) – Converged Networks using CNAs
7. Hyper-V 2012 R2 Network Architectures Series (Part 7 of 7) – Conclusions and Summary
8. Hyper-V 2012 R2 Network Architectures (Part 8 of 7) – Bonus
How would you recommend the network setup if there were 4 1Gb NICs and 2 10Gb NICs in each host?
It looks like you have *not* included any reference for NICs dedicated to storage, is this correct? There is a CSV-dedicated NIC, but this isn't necessarily the same NIC (or more) that is used to speak to the storage directly. (I believe your CSV network corresponds to the "Cluster traffic" network as documented in the TechNet article "Network Recommendations for a Hyper-V Cluster in Windows Server 2012".)
Big B, little b, what begins with B? One of my favorite Dr. Seuss books. But in computers it makes a big difference. Capital B means 'bytes'. Lower-case b means 'bits'. You have consistently used capital B in your designation of NIC speeds, making the article confusing. At least until one notices that you are consistent in your misrepresentation. A simpler representation for NIC speed is 10 GE or 1 GE rather than the longer form of 10 Gbps or 1 Gbps.
Another confusing point… In your first bullet point you state "RSS and dVMQ are mutually exclusive". Then in a later bullet you state "You can dedicate two entire 10GB Physical Adapters to your Virtual Machines using a LACP Team and create the vSwitch on top. dVMQ and vRSS will help VMs to perform as needed". They are mutually exclusive, but if you create a team they work together? Which is it? More clarification, please.
RSS and vRSS are not the same. vRSS is the virtual version of RSS and only applies to VMs. This feature requires VMQ to work, and that's why both can work together. RSS without the "v" is only exposed when no vSwitch is created on top of the NIC or the team.
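If you want to check this yourself, here is a quick sketch ("Ethernet" is just the default adapter name inside the guest; adjust to your VM):

```powershell
# On the host: VMQ should be active on the adapters under the vSwitch
Get-NetAdapterVmq | Where-Object { $_.Enabled } | Format-Table Name, Enabled

# Inside the guest: vRSS is turned on by enabling RSS on the VM's NIC
Enable-NetAdapterRss -Name "Ethernet"
Get-NetAdapterRss -Name "Ethernet" | Format-List Name, Enabled
```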
About 10GB… fair point. I will fix it when I have a second. However, I guess everybody understands the point and what I mean.
What is the best way to handle iSCSI traffic in this architecture?
From an operations and troubleshooting point of view, wouldn't it be easier to have all three NIC pairs configured as LACP teams? In case of errors I could use the same troubleshooting methodology for all NICs. I don't see why I wouldn't want the CSV/LM traffic flowing through a LACP team.
If the design was meant to show that RSS and VMQ can be used concurrently / for different purposes, I get it.
And: Very interesting series, dense information, tough to digest, thanks for sharing!