
Managing a Hyper-V Datacenter – Extra thoughts & tuning..

I will add to this post over time with anything I can recall that has helped in the tuning of the many environments I’ve worked on…

Edited: 30/11 – Thoughts on Dynamic Optimization

Before I continue, as usual I'll try not to repeat what you can already find out there, so here are some tips that have been helpful over the years. Be sure to investigate whether each of the tips mentioned is still valid for your version of the hypervisor, as a couple of these posts have aged:

Some good advanced concepts can be found on this Hyper-V Tuning blog:
Hyper-V Optimization Tips (Part 1): Disk caching
Hyper-V Optimization Tips (Part 2): Storage bottlenecks
Hyper-V Optimization Tips (Part 3): Storage Queue Depth (This is useful if using legacy SANs)
Hyper-V Optimization Tips (Part 4)
Hyper-V Optimization Tips (Part 5)

Basic tuning. Mostly still applies to 2016:
Top Performance Tuning Tips for Windows Server 2012 R2 Hyper-V

Hyper-V reference material:
Performance Tuning for Hyper-V Servers (2012 R2)
Performance Tuning for Hyper-V Servers (2016)
What’s new in Hyper-V on Windows Server 2016

A very important one to review for any scaled Hyper-V admin is this:
Detecting bottlenecks in a virtualized environment

Good luck!

 

Windows Core

Unless you absolutely need the GUI, run Windows Core. There are a few reasons why you might opt for the GUI, but weigh them carefully.

In 2012 R2 we had the luxury of turning the GUI on and off, but with 2016 we no longer have this option. This means that when you deploy your hosts, 'to GUI or not to GUI' is a fork-in-the-road decision.

Yes you can redeploy your nodes fairly fast these days, and my default position on node management is to redeploy rather than repair. But not all of us have the scale and/or process maturity to facilitate this. So the GUI deployment decision needs a little more attention.

I always recommend Core-based host deployments, but for some smaller clients this has turned out to be a bit of an administrative burden. Which is better for you is something only you can decide, but if you have the skills and/or time to learn PowerShell and remote admin, then definitely aim for Core.
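
If you do go down the Core path, the day-to-day admin is all remote. Here is a minimal sketch of that workflow, assuming WinRM is enabled (it is by default on Windows Server) and the Hyper-V management tools are installed on your admin machine; the host names are placeholders:

# Interactive remote shell on a Core host
Enter-PSSession -ComputerName HV-NODE01

# Run the same command against several hosts at once
Invoke-Command -ComputerName HV-NODE01, HV-NODE02 -ScriptBlock {
    Get-WindowsFeature | Where-Object Installed | Select-Object Name
}

# Most Hyper-V cmdlets also accept -ComputerName, so no session is required
Get-VM -ComputerName HV-NODE01 | Select-Object Name, State, MemoryAssigned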

Remember, automation is your friend! If it takes you 20 hours to automate a repeatable task that takes you 1 hour, then you will profit sooner than you think…

 

Always use validated hardware

You might get lucky with unsupported hardware but I would never recommend it.

Always check the Windows Server Catalog before making any purchase or upgrade.

 

Node quantity & sizing

Having worked with a range of service providers and enterprises over the years, something that has been interesting is the approach to node quantity & sizing.

Below are two examples that come to mind when thinking about inappropriately sized cluster nodes..

Too big – On one end of the spectrum I've worked with one client who had a 2-node cluster running all of their workloads. This seems fine on the surface, but this provider had quite a significant number of workloads and thus had two fairly hefty servers. This example was a few years ago now and the nodes had quad-socket 8-core processors with 1TB of memory. The problem was that these guys had about 80% of their memory allocated to workloads, making servicing of the nodes quite difficult. On top of that, they had not configured the quorum, so when they did lose a node, cluster quorum was lost, bringing down the entire platform. Not ideal…

Too small – At the other end, a service provider I worked with last year decided to build their cluster from a fleet of what I referred to as mini servers. Each node was dual quad-core with 64GB of memory, with 4 nodes making up a 2U rack device. Combining 8 of these devices, they were running 2 x 16-node clusters. This was great in the event of a node being put into maintenance mode or potentially failing, but the management of these mini nodes started to add up.

Adding to the complexity was a limited adoption of converged networking resulting in each node having 9 network cables and 7 IP addresses. As you can imagine, cabling was a nightmare!

Prior to gaining access I expected to see a plethora of small workloads running some app service or the like, but when I started to take a look into their average workload, they had all the usual suspects you'd imagine in your everyday virtualized environment. This sizing approach really baffled me…

Just right – So what is the one-size-fits-all perfect balance? There is none. But before deciding on your hardware acquisition, I thoroughly recommend you consider all aspects of the environment and aim for somewhere in the middle of those two examples.

As much as I want to avoid this topic, something else to consider is licensing. The smaller-sized environment belonged to a service provider using SPLA. At the time of working with them, the magic break-even number of VMs per node was in the 8-9 range. Any more than that and your per-OSE cost started to come down, improving your profitability. At 64GB per node, they struggled to fit more than 9 or 10 medium-sized VMs, so the average cost per instance remained quite high.

Depending on the size of your environment, I recommend enough capacity to be able to remove 2 nodes from your cluster without experiencing any performance degradation. Why 2? Imagine you are performing maintenance on a node (patching/testing/upgrading/etc) and another one happens to fail; you should be able to survive this without your workloads being affected. In smaller environments this level of failover might not be achievable, but you should ALWAYS have enough resources to be able to sustain a single node loss. Always…
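
As a rough sanity check, here is a minimal sketch that compares the memory currently assigned to VMs across the cluster against the capacity you would have left with two nodes removed. The cluster name is a placeholder, and it assumes identical nodes plus the Failover Clustering and Hyper-V PowerShell modules on your admin machine:

# Placeholder cluster name - adjust for your environment
$nodes = Get-ClusterNode -Cluster "HV-CLUSTER01" | Select-Object -ExpandProperty Name

# Memory currently assigned to VMs across all nodes
$assignedGB = (Get-VM -ComputerName $nodes | Measure-Object -Property MemoryAssigned -Sum).Sum / 1GB

# Physical memory per node (assumes identical nodes)
$perNodeGB = (Get-CimInstance -ComputerName $nodes[0] -ClassName Win32_ComputerSystem).TotalPhysicalMemory / 1GB

# Capacity left with two nodes out of the cluster (maintenance + a failure)
$survivableGB = ($nodes.Count - 2) * $perNodeGB

"{0:N0} GB assigned vs {1:N0} GB available with 2 nodes down" -f $assignedGB, $survivableGB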

 

Memory Management

This is a constantly debated and yet often overlooked topic, and it seems that every Hyper-V administrator I have discussed it with has a different formula or process for managing it. That said, this is almost a moot point in 2016 as the host manages its own memory far more efficiently, but in the interest of openness, here is an example of the magic formula (read: fuzzy logic) I used on my 2012 R2 cluster nodes…

Note: In the world of virtualization, I like to play it safe with memory, and considering it's fairly cheap these days, I recommend you do too.

Host OS reservation – 5GB for GUI and 3GB for Core. Whilst this may seem high, in this calculation I include things like SCOM, SCCM (if you're game enough to allow it on your fabric 😉), AV, Cluster Service reserves etc. If you remove all of the fat and are very disciplined in what you let SCOM monitor, then you could bring this down a bit further.

Per Virtual Machine – 32MB. Again, this is quite high. If you review the per-VM memory consumption in perfmon, the start-up/config overhead is around the 8MB mark and increases based on the VM memory allocation (I allow for that growth at roughly 16MB per GB of assigned VM memory, which is the third term in the formula below). Considering we use Azure Site Recovery and VM-level backups, I prefer to add a little extra for each VM. Without VM-level backups or ASR/Replica I would likely halve this.

If using CSV cache, you should take a 1:1 calculation. For example, 4 CSVs with 1GB of cache per CSV equates to a 4GB requirement per node. Make sure you include this in your calculations.
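
If you are not sure what your CSV cache is currently set to, you can read (and set) the cluster block cache size from PowerShell. A sketch assuming Windows Server 2012 R2 or later, where the value is expressed in MB and the cluster name is a placeholder:

# Read the current CSV block cache size (MB)
(Get-Cluster -Name "HV-CLUSTER01").BlockCacheSize

# Set it to 1GB - this memory comes out of each node's host memory,
# so include it in your host reserve calculations
(Get-Cluster -Name "HV-CLUSTER01").BlockCacheSize = 1024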

For example:

On a cluster node running Windows Core, where we have 10 VMs with 100GB of VM memory assigned, the formula looks something like this:

3GB + (32MB x 10) + (16MB x 100) = 4992MB for the host.
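
If you want to bake this into your deployment scripts, here is a minimal sketch of the formula as a reusable function. The function name is hypothetical, and the 3GB/32MB/16MB constants simply mirror the worked example above, so adjust them to your own numbers:

# Hypothetical helper mirroring the formula above:
# host OS reserve + per-VM overhead + per-GB-of-assigned-memory overhead
function Get-HostMemoryReserveMB {
    param(
        [double]$VMCount,
        [double]$AssignedVMMemoryGB,
        [double]$HostOSReserveMB = 3072   # 3GB for Core, use 5120 for a GUI host
    )
    [math]::Ceiling($HostOSReserveMB + (32 * $VMCount) + (16 * $AssignedVMMemoryGB))
}

# The example above: 10 VMs with 100GB of VM memory assigned
Get-HostMemoryReserveMB -VMCount 10 -AssignedVMMemoryGB 100   # 4992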

In a cluster, you should aggregate all resources and consider failover in your equation. Because all of the hosts in your cluster are identical (not always the case, but for the example we will stick to this), we should add up all resources and divide…

For example:

In a 4-node cluster where we have 40 VMs with 400GB of memory assigned, we should calculate our requirements based on maintaining optimal performance during a node failure.

3GB + (32MB x (40 / 3)) + (16MB x (400 / 3)) = 5632MB for each surviving host.
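
Using the same hypothetical helper from the earlier sketch, the failover scenario works out like this (the VM count and memory are divided across the three surviving nodes):

# 40 VMs / 400GB spread across the 3 surviving nodes after a failure
Get-HostMemoryReserveMB -VMCount (40 / 3) -AssignedVMMemoryGB (400 / 3)   # 5632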

The above calculation is only a guide and you should review how you assign resources thoroughly. Be ultra careful when you oversubscribe your compute, especially when it comes to memory.

i.e. if you are running at full capacity on a cluster and a node fails, the VMs redistributed from the failed node might not be able to start up if not enough memory is available.

To apply your theory, use VMM to define host reserves. This can be set on the Host Group or on the Host itself. I would like to see this feature be applicable to clusters, but alas it is not…

Here is a rule being applied to a Host Group

 

Here is a rule being applied to a Host

Note: If you are applying this to hosts, I thoroughly recommend you do it as part of your deployment process with your automation tools. Over time the calculations might evolve, so in this instance you should use PowerShell to apply them to your hosts consistently.
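
A minimal sketch of what that automation step could look like, reusing the hypothetical Get-HostMemoryReserveMB helper from earlier. The host names are placeholders, and the final step of writing the value into VMM is left to whichever VMM cmdlets or console workflow you use:

# Placeholder host list - in practice this would come from VMM or the cluster
$hostNames = "HV-NODE01", "HV-NODE02", "HV-NODE03"

foreach ($node in $hostNames) {
    $vms = Get-VM -ComputerName $node

    # Configured startup memory as a proxy for assigned VM memory
    $assignedGB = ($vms | Measure-Object -Property MemoryStartup -Sum).Sum / 1GB

    $reserveMB = Get-HostMemoryReserveMB -VMCount $vms.Count -AssignedVMMemoryGB $assignedGB

    # Apply $reserveMB as the host reserve for $node in VMM
    # (via your VMM tooling of choice - not shown here)
    "{0}: reserve {1}MB" -f $node, $reserveMB
}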

As always, keep track of your hosts' utilization with the various tools and metrics available.

This will vary greatly based on your environment but hopefully this will give you something to think about….

 

Trim some fat

There are many things that can be done to trim down the OS of the hypervisor, but here are a few considerations:

Remove Windows Defender – controversial! 😉

Remove-WindowsFeature Windows-Defender-Features -IncludeManagementTools

If removing Windows Defender is not an option for you, a good post by Geert van Horrik on how to add the appropriate exclusions can be found here.

Disable the Print Spooler – trivial, but why not 🙂

Get-Service -Name Spooler | Stop-Service
Get-Service -Name Spooler | Set-Service -StartupType Disabled

Remove SMB1 if it’s not in use (Windows 2016 only)

Remove-WindowsFeature FS-SMB1

Will expand on this as I review my deployment notes.

 

Dynamic Optimization

Using Dynamic Optimization is great, but you should consider something first…

Do your Live Migration NICs have RDMA capability? RDMA is important for Live Migrations as it results in shorter handover times from host to host. Without RDMA on your Live Migration adapters, you could experience an ever-so-slight pause of the VM.
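
If you are not sure, a quick check from PowerShell is below. This is only a sketch, assuming the inbox NetAdapter and Hyper-V modules; run it on each host:

# Show which adapters are RDMA-capable and whether RDMA is enabled
Get-NetAdapterRdma

# Check how Live Migration is currently configured on the host
Get-VMHost | Select-Object VirtualMachineMigrationEnabled, VirtualMachineMigrationPerformanceOption

# If your Live Migration NICs do support RDMA, SMB is usually the option to pick
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB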

We have many RDS farms on our clusters, and in earlier builds the handover was more noticeable. Still extremely slight, but it would occasionally result in the user's RDS session dropping and reconnecting. Because of this, we excluded those VMs from the migration tasks, which worked perfectly for us.

In recent years the drivers and firmware seem to have improved, and even though those cluster node NICs do not support RDMA, the handovers seem to be better.

I suggest using Dynamic Optimization as a default. Always review and test the settings that work for you, but this is a great feature and it works well, just as long as your underlying fabric is configured properly 😉

 

Enjoy!

Dan
