
Managing a Hyper-V Datacenter – Host Networking – Teaming

 

There are many discussion topics around host networking that need to be considered. I will extend this post over time to include what I can, but many of these are dependent on your requirements.

Some of the considerations include:

  • Teaming options (this post)
  • Converged design
  • QoS policies
  • SDN architecture
  • Security & Edge Networking
  • Switch high availability
  • VMQs and RSS

 

Teaming options

 

Typically when discussing teaming options we need to determine our teaming mode and the load balancing algorithm used. Some of these decisions are made for you by your infrastructure options and/or requirements.

Before we continue, never use vendor teaming software. Ever. You might be lucky, but it's typically rubbish and it is no longer supported. Got it? Great 🙂

 

Network Teaming modes:

There are three teaming modes supported in Hyper-V: LACP, Switch Independent and Switch Dependent (static teaming).

LACP is great as it gives you aggregated speeds inbound and outbound. LACP requires switch-level configuration though, and to get switch-level redundancy your switches must support distributed LACP.

Switch Independent only aggregates speed outbound. As the name suggests, Switch Independent does not require any switch configuration, so it is much more flexible.

Switch Dependent (static teaming) is the third option, but I have never used it myself nor witnessed any clients opt for it. I have no experience to share here other than: if you can use LACP or Switch Independent, I recommend you opt for one of those.

LACP is the best for pure throughput performance. You should weigh up the options for both and consider whether you actually need the performance gain LACP will bring, and whether that gain outweighs the flexibility of a Switch Independent option.
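
For reference, here is a minimal PowerShell sketch of how each mode is created with the in-box New-NetLbfoTeam cmdlet. The team and adapter names are hypothetical placeholders, not a recommendation:

# Hypothetical team/adapter names; substitute your own (see Get-NetAdapter)

# Option 1: Switch Independent team; no switch-side configuration required
New-NetLbfoTeam -Name "Team-VM" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent

# Option 2: LACP team; requires a matching LACP port channel on the physical switch(es)
New-NetLbfoTeam -Name "Team-VM" -TeamMembers "NIC1","NIC2" -TeamingMode Lacp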

What do we use? My current default preference is Switch Independent. I find from a management and flexibility perspective that this gives me the best experience. This decision was also influenced by our 3rd fabric generation being a blade fabric, which has no option for LACP in the host. When we had rack servers only, we would always configure LACP. Since switching to Switch Independent, our network management overhead has been slightly reduced and the added flexibility has definitely been a welcome change.

 

Load balancing Algorithm:

Once you've chosen your teaming technology, the second choice is your load balancing algorithm. This will require some testing in your environment to work out which best suits your needs, but essentially your choices are Dynamic, Address Hash or Hyper-V Port. For a great read on the differences and where policy-based design decisions need to be considered, see this excellent article from Darryl van der Peijl.

As mentioned in Darryl's post, Dynamic is the default when configuring an LBFO team and is often recommended as the best. My experience has shown that, from a pure performance perspective, Hyper-V Port often wins the battle for achieving the best networking performance inside your VMs. This is often echoed by my clients reporting the same experience. "Our environment has gone from Mini Cooper to Ferrari under heavy workloads simply by changing everything to Hyper-V Port profiles" was a response from one client.

Also, I have had feedback that Citrix NetScalers might not support Dynamic. I haven't verified this personally, but it is something for you to consider.

The caveat with Hyper-V Port is that it will not dynamically load balance traffic across your team and "Each Hyper-V port will be bandwidth limited to not more than one team member's bandwidth because the port is affinitized to exactly one team member at any point in time." My nodes have 4x 10GbE connectivity and each blade enclosure has an 80GbE fabric uplink that rarely bursts over 50% utilization, so I accepted the 10GbE per-port limitation in the design of our existing generation.

As mentioned, we have multiple 10GbE Virtual Connect fabrics, so I prefer to use Switch Independent across the board. For LBFO teams carrying VM traffic I select Hyper-V Port, whilst any non-VM teams I set to Dynamic.
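
As a rough sketch of that layout (the team, NIC and switch names below are placeholders, not our actual configuration), the in-box cmdlets would look something like this:

# VM traffic team: Switch Independent + Hyper-V Port
New-NetLbfoTeam -Name "Team-VM" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Bind the Hyper-V virtual switch to the team interface created above
New-VMSwitch -Name "vSwitch-VM" -NetAdapterName "Team-VM" -AllowManagementOS $false

# Non-VM team (management, cluster, live migration etc.): Switch Independent + Dynamic
New-NetLbfoTeam -Name "Team-Mgmt" -TeamMembers "NIC3","NIC4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic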

Below is the Teaming config on one of our production environment hosts. From a pure performance perspective, this was the optimal configuration.

[Screenshot: LBFO teaming configuration on a production Hyper-V host]

A quick run-through on how to build this in SCVMM can be found here.
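
Outside of SCVMM, a quick way to sanity-check what a host ended up with is plain PowerShell:

# Confirm teaming mode and load balancing algorithm on each team
Get-NetLbfoTeam | Format-Table Name, TeamingMode, LoadBalancingAlgorithm, Status

# Review the state of each member NIC
Get-NetLbfoTeamMember | Format-Table Name, Team, OperationalStatus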

That all said, I recommend you thoroughly test this and review your internal policy for failover, but for me the performance gain we experienced with Hyper-V Port far outweighs the single-NIC limitation. I would love to hear your feedback on this topic.

As usual, your appetite may differ from mine, so don't take the above as gospel and ensure you test thoroughly. Then test it again…

Enjoy!

Dan
