
Managing a Hyper-V Datacenter – Converged Networking and RDMA

Server 2016 brings a great new feature called Switch Embedded Teaming (SET).

A great overview of SET can be found in this TechNet article. But to summarize, SET essentially allows a virtual network adapter to use the RDMA capabilities of the underlying physical NICs. I won’t go into too much detail on RDMA; there are essentially three types: RoCE, iWARP and InfiniBand. RoCE is the most common and requires that Data Center Bridging (DCB) be enabled on your switches, as well as the Windows feature of the same name being installed on your hosts. iWARP requires no specific switch configuration, but it is still recommended to install the DCB feature on your hosts. InfiniBand is known as the fastest, but it is also the most expensive.
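If you go down the RoCE path, the host side of the DCB configuration looks something like the sketch below. This is only a rough example: the adapter names ("NIC1", "NIC2"), priority 3 and the 50% bandwidth reservation are placeholders you’d swap for whatever your network team has defined on the switches.

```powershell
# Install the Data Center Bridging feature (required for RoCE, recommended for iWARP)
Install-WindowsFeature -Name Data-Center-Bridging

# Tag SMB Direct traffic (port 445) with 802.1p priority 3 (example priority)
New-NetQosPolicy -Name "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Enable Priority Flow Control only for the SMB priority
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Reserve a share of bandwidth for SMB via ETS (50% is just an example)
New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

# Apply DCB/QoS on the physical RDMA NICs (placeholder adapter names)
Enable-NetAdapterQos -Name "NIC1","NIC2"
```

Whatever priority and bandwidth values you pick, they need to match what is configured on the physical switches end to end.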

There are many documents floating around the internet claiming which is best, typically written by vendors invested in a particular technology. This article from NetApp makes some good points about vendor interests. It is now three years old, but the arguments are mostly still valid. My default choice, for a few reasons, has always been RoCE.

[Diagram: converged networking with SET and RDMA, taken from the article above]

As the above diagram illustrates, we no longer need dedicated SMB fabric adapters to take advantage of the performance boost we get from RDMA. We can further converge our networking and use a single SET team to carry all of our data streams. This also slightly reduces the cost per node from a hardware perspective, as we only need two RDMA-capable NICs. Or we can reallocate that saving to faster NICs and start to implement some 25GbE RoCE or some sweet 40/100Gb InfiniBand networks.
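To give a feel for how little is involved, here is roughly what standing up a converged SET switch looks like in plain PowerShell. The switch and adapter names below ("SETswitch", "NIC1", "NIC2", "SMB1", "SMB2") are made up for the example:

```powershell
# Create a SET switch across two RDMA-capable physical NICs (placeholder names)
New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1","NIC2" `
    -EnableEmbeddedTeaming $true -AllowManagementOS $false

# Add host vNICs for management and SMB/storage traffic
Add-VMNetworkAdapter -ManagementOS -SwitchName "SETswitch" -Name "Management"
Add-VMNetworkAdapter -ManagementOS -SwitchName "SETswitch" -Name "SMB1"
Add-VMNetworkAdapter -ManagementOS -SwitchName "SETswitch" -Name "SMB2"

# Expose RDMA on the SMB host vNICs
Enable-NetAdapterRDMA -Name "vEthernet (SMB1)","vEthernet (SMB2)"
```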

And as always, check the Windows Server Catalog (WSC) to ensure your NICs are supported!
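A couple of quick sanity checks I like to run on each host before going any further (nothing exotic, just an example):

```powershell
# Do the physical NICs (and later the vNICs) report RDMA capability?
Get-NetAdapterRdma

# Does the SMB client see any RDMA-capable interfaces?
Get-SmbClientNetworkInterface | Where-Object RdmaCapable
```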

But why is RDMA so important? In a Hyper-V world, one of the big benefits of RDMA is greatly improved throughput and reduced latency for SMB storage traffic and Live Migrations.
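If you want live migrations to ride over SMB (and therefore SMB Direct when RDMA is present), it’s essentially a one-liner per host. The bandwidth cap below is optional, and the 2GB/s figure is just an illustrative number:

```powershell
# Use SMB as the live migration transport so SMB Direct/Multichannel kick in when available
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB

# Optionally cap live migration bandwidth so storage traffic isn't starved
# (requires the SMB Bandwidth Limit feature: Install-WindowsFeature FS-SMBBW)
Set-SmbBandwidthLimit -Category LiveMigration -BytesPerSecond 2GB
```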

With the introduction of Storage Spaces in 2012 and Storage Spaces Direct in 2016, fabric architects are finally free of the extremely overpriced burden of monolithic SANs and gain access to amazing performance and scalability at a significant cost reduction. From a management and complexity perspective, after making the switch from legacy iSCSI/FC to SMB3, you’ll never look back. RDMA gives these technologies a significant performance boost. More on this subject when we get to the storage discussion.

Another perk of SET switches is that they also reduce the cabling requirement from 7 cables per node (4 network, 2 power and 1 OoB management) down to 5 in a rack mount server deployment. If you’ve ever cabled a full rack of servers then you’d no doubt welcome this change.

Below is what my adapter config looks like in a recent 2016 HCI environment. All virtual network adapters are connected to the SET-Switch and we use QoS to prioritize traffic.

[Screenshot: host virtual network adapter configuration]
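SCVMM drives most of this for us, but for reference, the rough per-host PowerShell equivalent of the priority tagging and team affinity in that config looks like the sketch below. Again, the vNIC and physical NIC names are the placeholders from the earlier example:

```powershell
# Tag the storage vNICs so the DCB priority configured earlier is honoured end to end
Set-VMNetworkAdapter -ManagementOS -Name "SMB1" -IeeePriorityTag On
Set-VMNetworkAdapter -ManagementOS -Name "SMB2" -IeeePriorityTag On

# Pin each SMB vNIC to a physical team member for predictable traffic flow
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "SMB1" -PhysicalNetAdapterName "NIC1"
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "SMB2" -PhysicalNetAdapterName "NIC2"
```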

The Logical Switch in SCVMM

[Screenshot: the Logical Switch configuration in SCVMM]

Much easier to configure, right?

Just for good measure, here is the virtual network adapter port profile for the storage NICs:

 

If you want to read some good how-tos for building converged networks with a SET switch using SCVMM, check out this helpful post from Charbel Nemnom.

 

If your current infrastructure doesn’t support RDMA, don’t worry about it too much. Whilst a correctly configured RDMA network will give your environment a performance boost, everything will still work perfectly well without it.

We have a generation of Hyper-V servers running Server 2012 R2 and a Storage Spaces deployment that has been running flawlessly for several years. Even though that part of our environment can’t take advantage of things like SMB Direct, we get awesome performance out of it.
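If you’re ever unsure whether RDMA (SMB Direct) is actually in play in your environment, a couple of quick checks will tell you. Note the RDMA Activity performance counters only appear on hosts with RDMA-capable adapters:

```powershell
# Are current SMB connections using RDMA-capable interfaces or plain TCP?
Get-SmbMultichannelConnection

# Live RDMA traffic shows up under the "RDMA Activity" performance counters
Get-Counter -Counter "\RDMA Activity(*)\RDMA Inbound Bytes/sec"
```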

If you have the choice and, probably more importantly, the budget for RDMA, then take the plunge. On the other hand, if obtaining RDMA comes at the cost of something more significant, then it’s something you’ll have to weigh up.

A quick shout-out to Storage Spaces Direct here. You don’t need RDMA for S2D either, but it is highly recommended if it will be your primary production storage; you can expect up to around a 50% performance increase with it.

Here are a few posts with some non-RDMA demos of Storage Spaces Direct:

Installing #S2D without #RDMA

NON-RDMA STORAGE SPACES DIRECT STRESS TEST STEPS and RESULT – #WINDOWSSERVER #S2D #MVPHOUR #STEP-BY-STEP

Running Storage Spaces Direct on HPE ProLiant Micro Servers #S2D #HyperV #WS2016

Happy converged networking!

Enjoy!

Dan
