Cristian Edwards

A piece of virtualization with Hyper-V and SCVMM

Hyper-V 2012 R2 Network Architectures (Part 8 of 7) – Bonus


Hi again. This post was not planned, but today an MVP (Hans Vredevoort) explained to me an additional backend architecture option that is also recommended. He drew a similar diagram to represent this option, so let's figure out what is different.


This configuration is slightly different from the Dynamic Backend QoS in part 5 because it simplifies the backend configuration, presenting only 4 multiplexed adapters to Windows. Once this is done, you just need to create a team with two of these adapters for all the networks required on the parent partition, like Mgmt, CSV and Live Migration.

On top of the team, you have to create a tNIC (team NIC) for each traffic type, with the corresponding VLAN. This gives isolation between traffic types and control from the OS, while allowing you to create more tNICs dynamically if needed, without having to reboot the Hyper-V host (you may need additional tNICs for SMB, for example).
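As a quick sketch of the steps above (the adapter names, VLAN IDs and tNIC names are examples, not from the diagram):

```powershell
# Create the parent-partition team from two of the multiplexed adapters
New-NetLbfoTeam -Name "MgmtTeam" -TeamMembers "Mux1","Mux2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Add one tNIC per traffic type, each tagged with its own VLAN;
# the default team interface can carry the Mgmt traffic
Add-NetLbfoTeamNic -Team "MgmtTeam" -Name "CSV" -VlanID 20
Add-NetLbfoTeamNic -Team "MgmtTeam" -Name "LiveMigration" -VlanID 30
```

Additional tNICs (for SMB, for instance) can be added later with the same `Add-NetLbfoTeamNic` cmdlet, with no reboot required.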

Everything can be managed either with PowerShell or the NIC Teaming console (lbfoadmin.exe), and of course RSS will give you the best possible performance from the adapters presented by the backend.
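You can check that RSS is actually active on the adapters presented by the backend with the standard cmdlets (the adapter name pattern below is an example):

```powershell
# Inspect the RSS state of the multiplexed adapters
Get-NetAdapterRss -Name "Mux*" | Format-Table Name, Enabled, NumberOfReceiveQueues

# Enable RSS on an adapter where it is disabled
Enable-NetAdapterRss -Name "Mux1"
```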

The team and vSwitch configuration on the right side of the diagram is identical to the Dynamic Backend QoS discussed in part 5, so the VM/tenant traffic is completely isolated from the Mgmt, CSV and Live Migration traffic.

One caveat of this configuration is the QoS for each parent partition traffic type. How can I prioritize Mgmt over CSV, or vice versa? I would answer this question with another question: do you really need to prioritize the traffic of your tNICs when there is already a QoS policy at the backend that guarantees at least the amount of bandwidth that you define?

It is true that in the part 5 architecture each parent partition traffic type has its own QoS policy in the backend, but it is also true that the sum of those reservations can be equivalent to what you would want to reserve when using the tNIC approach.
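For contrast, in the part 5 architecture the per-traffic reservation is expressed as a minimum-bandwidth weight on each parent-partition vNIC of the vSwitch. A sketch, assuming a vSwitch created with `-MinimumBandwidthMode Weight` (the vNIC names and weights are examples):

```powershell
# Part 5 style: one minimum-bandwidth weight per parent-partition vNIC
Set-VMNetworkAdapter -ManagementOS -Name "Mgmt" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "CSV" -MinimumBandwidthWeight 40
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 50
```

In the tNIC approach, a single backend QoS policy reserves the equivalent sum of bandwidth for the whole team instead.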

Many thanks to Hans Vredevoort for the architecture reference.

  • First, great article! Thanks for taking the time to share your deep knowledge and experience.
    I have a few questions; I hope you have some time to spare.

    What is better for VM traffic when using CNAs: host Dynamic NIC teaming with dVMQ, or SR-IOV with in-guest NIC teaming?

    Regarding the bonus, part 8: with HP FlexFabric uplinks, is it better to have LACP or Active/Standby SUS? Will LACP cause MAC flapping?

    Also regarding the bonus, part 8: why a 10Gb network? Where is FCoE? My opinion is to mix part 6 (4Gb FCoE + 6Gb LAN) with part 8 (the tNIC part). What do you think?

    Kindest Regards,

    Juan Quesada

  • Hi Juan,
    for question #1
    It is not possible to say which is better between dVMQ and SR-IOV; it really depends on your workload. SR-IOV is intended more for low-latency workloads, but maybe dVMQ will already give your app what it needs.
    for question #2
    for question #3
    Not really. Part 8 is about creating tNICs on top of Windows teaming. The backend may or may not use converged networking like Virtual Connect or Cisco UCS.

  • Hello Cristian! Great series! What about another scenario: the same as this one, but without the converged backend configuration, just QoS configured on the server itself? Maybe useful if you only have 1Gb adapters?

    New-NetQosPolicy -Name "Live Migration policy" -LiveMigration -MinBandwidthWeightAction 50
    New-NetQosPolicy -Name "Cluster policy" -IPProtocolMatchCondition UDP -IPDstPortMatchCondition 3343 -MinBandwidthWeightAction 40
    New-NetQosPolicy -Name "Management policy" -IPDstPrefixMatchCondition x.x.x.x/24 -MinBandwidthWeightAction 10

  • Hello Cristian, thank you for this great post series! I have a question: in part 8 of 7 you describe an interesting solution which has the RSS capability on the management team.
    We measured that the Hyper-V switch limits throughput to about 35% below the optimum, so it is a smart idea to decouple that traffic. In your picture you write in the upper right corner that “configuration required ..”. How would you realize that if you tried to use this in combination with the SCVMM 2012 R2 bare-metal deployment features?

  • @Andre While in Houston at TechEd NA, we discussed this option with the VMM team. There is a high chance that this teaming configuration will show up in a future version of VMM.

  • I did this configuration and everything worked awesome except for one thing: one of the uplink LAN switches failed, and the mgmt team went down. Only 1 out of 2 adapters failed, but the team went down because of NIC teaming convergence (I had configured a Switch Independent Dynamic team with all adapters active). Because the mgmt team was down for about 2 minutes, cluster communication was lost, taking down several VMs (about 25% of a 200-VM, 4-node cluster).
    How can I overcome this? I'm thinking of having 3 teams: one for VMs (Active/Active), one for the domain network (Active/Passive) and one for LM and CSV (Passive/Active). That way, if one switch fails, at least one network will not enter convergence, since it already has a passive lane, always maintaining a live cluster heartbeat. Has anyone encountered this? Solved it?
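    A sketch of the Active/Passive team the commenter describes (adapter and team names are examples): the standby member only becomes active when the active one fails, so a single switch failure does not force that team through re-convergence.

    ```powershell
    # Active/Passive team: NIC2 stays in standby until NIC1 fails
    New-NetLbfoTeam -Name "DomainTeam" -TeamMembers "NIC1","NIC2" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
    Set-NetLbfoTeamMember -Name "NIC2" -Team "DomainTeam" -AdministrativeMode Standby
    ```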
