Updated in Mar 2023: Ignore the May 2022 update, I was right all along. Microsoft actually changed the default VMQ settings in Azure Stack HCI deployments, so that says all I need to know.
**** ALWAYS change the base processor to core 2 (or 1 if HT is not enabled) **** Updated in May 2022: For about a year now, since MS made some tweaks and with growing confidence in the out-of-the-box settings, the optimal position on WS2019 has changed.
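As a quick sketch of the base processor change above (the adapter name "NIC1" is a placeholder — substitute your own physical NIC), the setting can be checked and applied with the in-box NetAdapter cmdlets:

```powershell
# Review the current VMQ configuration on all adapters
Get-NetAdapterVmq | Format-Table Name, Enabled, BaseProcessorNumber, MaxProcessors

# Move the VMQ base processor off core 0 to core 2
# (use 1 instead if Hyper-Threading is not enabled)
Set-NetAdapterVmq -Name "NIC1" -BaseProcessorNumber 2
```

This keeps VMQ interrupt processing off core 0, which the host's default queue and other system work already contend for.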
Excited to be speaking at Microsoft Ignite The Tour in Sydney. For those coming along, below are my sessions and demo time slots… If you see me, make sure you come up and say hi!
[table id=5 /]
From time to time I have the requirement to deploy/update a basic app or run a script on my Hyper-V hosts.
This is relatively simple using a remote PS session, or if you have SCCM managing your fleet. But sometimes you just need to push something out fast, and luckily SCVMM can assist.
My use case is to run an update for the Microsoft Site Recovery Services Agent on my hosts, which I do periodically.
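A minimal sketch of pushing an installer out via VMM's script command facility — the host group name, share path, MSI name, and Run As account here are all placeholder assumptions, not values from my environment:

```powershell
# Target every host in a given VMM host group (names are placeholders)
$vmHosts = Get-SCVMHost -VMHostGroup (Get-SCVMHostGroup -Name "Tenant HCI")
$runAs   = Get-SCRunAsAccount -Name "HostAdmin"

foreach ($vmHost in $vmHosts) {
    # Run the installer silently on the host under the Run As account
    Invoke-SCScriptCommand -VMHost $vmHost `
        -Executable "msiexec.exe" `
        -CommandParameters "/i \\fileserver\updates\ASRAgent.msi /qn" `
        -RunAsAccount $runAs `
        -TimeoutSeconds 600
}
```

The nice part is that VMM handles the credentials and job tracking for you — each invocation shows up as a job in the VMM console, so you can see at a glance which hosts succeeded.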
WORK IN PROGRESS. Editor's note: this is still a working document, as my priorities have to be on other work right now, but in the interest of sharing I have made it available now. If you find any issues or errors, let me know - thanks for reading!
There are many S2D build blogs out there and I don't want to just add to the list, but given I'm doing this build with SCVMM and SCOM integration, I thought I'd run through the additional steps.
Error: “VMM cannot use [Logical Switch] to create a virtual switch as there are no uplink port profile sets present on this logical switch”
Environment: SCVMM 1801.
Symptom: right-click on a Hyper-V host and, after a couple of seconds, the below error appears. After clicking OK, VMM crashes.
Here is the fix that worked for me.
First, check the host group of the problematic hosts. In this deployment, they live in host group “Tenant HCI”.
A few months ago we placed an order for some slick HPE Gen10 hardware to replace our existing storage service. Our trusty Server 2012 R2 Storage Spaces with DL360 Gen8’s and DataON enclosures has served us well, very well, but it’s time to move on to the latest and greatest…
“The SANKiller”
At the time of writing this (10 April 2018), although they've very recently added two more SFF configs, HPE have not published a WSSD-certified solution for the Gen10 LFF chassis.
An interesting solution brief on Hyper-Converged Infrastructure from DataON got my cogs ticking on a topic that has bugged me for a while… All-Flash storage.
Firstly, this DataON solution is a seriously slick contender… a 3M IOPS 4-node HCI in 8U. The tech geek in me would be quite pleased to have that bad boy driving services on my platform.
But is it necessary?
Ultimately, that is a question that only you can answer based on your requirements, but after deploying several Hyper-Converged Infrastructure (HCI) and Storage Spaces Direct (S2D) solutions from various vendors over the last 12 months, I wanted to share my thoughts on this All-Flash phenomenon.
I will add to this post over time with anything I can recall that has helped in the tuning of the many environments I’ve worked on…
Edited: 30/11 - Thoughts on Dynamic Optimization
Before I continue, as usual I'll try not to repeat what you can already find out there, so here are some tips that have been helpful over the years. Be sure to investigate whether each of the tips mentioned is valid for your version of hypervisor, as a couple of these posts have aged:
I have been wanting to put together a series for a while now on the many aspects to consider when running a Hyper-V platform. So here we are…
As usual I want to avoid step-by-step installs as there are plenty on the internet, so I will link the useful ones and comment on any variances I recommend or points that need a little more consideration.
This series will be organic and will evolve over time.
Hi all, this has been a long time coming, but I wanted to share my thoughts and experiences on virtualization, Hyper-V and blades.
Before we get started, I wanted to put this in context and give some reasoning as to the purpose of this post. Over the last 5 or 6 years, I’ve been closely working with many service providers and enterprises in designing, configuring and supporting Hyper-V and System Center environments, and an interesting and often polarising discussion topic that regularly surfaces is blades vs racks.