Mythbusting: The truth about blades…

Hi all, this has been a long time coming but I wanted to share my thoughts and experiences on virtualization, Hyper-V and blades.

Before we get started, I wanted to put this in context and give some reasoning as to the purpose of this post. Over the last 5 or 6 years, I’ve been working closely with many service providers and enterprises in designing, configuring and supporting Hyper-V and System Center environments, and an interesting and often polarising discussion topic that regularly surfaces is blades vs racks.


The debate has been raging for years and, to be honest, there will never be a one-size-fits-all answer. Over time we’ve seen various blogs out there giving blades a bit of a beating, so I wanted to weigh in and give some perspective. This post is not a ‘blades are better than racks’ post; it’s intended to help you weigh up blades with information based on some fairly solid industry experience and, let’s face it, my not-so-humble opinion… 😉

To set the technical scene, the assumption here is that you have a fundamental knowledge of what blades are and the basic differences between them and rack servers. Ok? Done.

Last thing before we get started: I will try to keep this vendor agnostic, and even though my experience ranges across most blade vendors, my production environments are HPE, so I may stray towards them when I reference tech specs. That said, I know these points mostly apply to at least Dell & Cisco as well (my Lenovo experience is limited).

Density (and cost)

Scale. Cram them all in as tight as we can! Ok, no. That is not the goal here. As any cloud designer knows, it’s about how many services you can run. This is nothing new, but I believe it is often misunderstood, or used to misrepresent the truth.

What? Why? Great questions…

As an experienced service provider and Hyper-V veteran, I can tell you that accurately quantifying how many services you can get out of a resource is very difficult and typically based on recommended estimates or, quite often, an educated guess. I’m sceptical of anyone who can put a definitive number on how many services they can accommodate before deployment.

Ok, maybe I’m being a little stubborn there, but I have yet to have a discussion on this topic that does not come down to a limit based on a recommendation, not reality.

I’m starting to ramble a bit here, but let’s seriously take a look at what we know. We know that with Server X, we get 16 physical cores and 512GB of memory. Arbitrary numbers, but humour me… Whether that is a rack or a blade, the specs do not differ. A core is a core, and memory is a… um… What was I talking about… oh yeah, memory…
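To illustrate why I call it an estimate, here’s a back-of-the-envelope consolidation calculation for that Server X spec. The oversubscription ratio, host reserve and ‘average’ VM size are purely illustrative assumptions, not recommendations:

```python
# Rough per-node consolidation estimate for the Server X spec above.
# The ratios and VM sizes are illustrative assumptions - which is exactly
# why "how many services fit" is an estimate rather than a fact.

physical_cores = 16
memory_gb = 512

vcpu_per_core = 4               # assumed vCPU:pCore oversubscription ratio
host_reserve_gb = 16            # assumed memory held back for the parent partition
avg_vm_vcpu, avg_vm_gb = 4, 16  # assumed "average" service footprint

by_cpu = (physical_cores * vcpu_per_core) // avg_vm_vcpu
by_mem = (memory_gb - host_reserve_gb) // avg_vm_gb
print(f"Roughly {min(by_cpu, by_mem)} VMs per node "
      f"(CPU allows {by_cpu}, memory allows {by_mem})")
# Whether that node is a blade or a rack server doesn't change this maths.
```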

No matter how you frame the output, whether it be in servers, services or numbers of flashing lights, the fact remains that density plays a significant role in the equation. That all-important role is not only limited to how many gigawatts we can pump out; it’s also about how much it costs to maintain and run.

Another hot topic these days is power. So let’s pull out Excel and crunch some numbers…

Using my production environment as an example, each blade averages 185W during peak time. To run a fair comparison, each rack server of the same vintage (DL360 Gen8) running a similar spec under similar (marginally lower) load is almost identical in power consumption at 196W on average. So on a 1:1 comparison in my environment, a 1U rack server uses roughly the same power as a single blade while carrying slightly less workload.

Each blade enclosure has other components that need power, so one of our fully decked HPE c7000s with BL460c Gen8s currently runs at about 4kW during peak production. In a world where you are oversubscribing more aggressively than we do, expect the power usage to increase, but the same rule applies to rack mounts.

So let’s fill up some racks for a comparison based on raw compute (you could leverage Hyper-Converged Infrastructure (HCI) with racks, but because storage takes a significant slice of your processing power, to be fair let’s stay with raw compute):

Rack (1U) option – 35U

32x HPE DL360s, 1x OoB switch and 2x ToR switches = approx. 7kW

Blade option – 22U

2x c7000 with 32x BL460c and 2x ToR switches = approx. 8kW

I’ve grabbed the average power consumption from our ToR switches as a guide.

It’s a common statement that blades have better power consumption than rack mounts, and on a single blade vs rack server comparison, that statement holds up in my environment. Against that, though, blades have the overhead of the enclosure and its components to consider, which bring their own level of redundancy, so the direct comparison above shows slightly better efficiency for the racks. Even with the extra considerations on the rack side, using this rough power-to-performance calculation the racks win out by roughly 12.5%. But look at the space required to achieve that 12.5% saving: the rack servers need about 60% more real estate (35U vs 22U) to run the same workload. That’s the really interesting point.
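To make that comparison reproducible, here’s a minimal sketch of the back-of-the-envelope maths, using the rough figures quoted above from my environment; swap in your own numbers before drawing any conclusions:

```python
# Power-to-performance and footprint comparison, using the rough figures above.
NODES = 32  # same raw compute in both options

rack_power_kw, rack_units = 7.0, 35    # 32x DL360 + 1x OoB + 2x ToR
blade_power_kw, blade_units = 8.0, 22  # 2x c7000 (32x BL460c) + 2x ToR

power_saving = 1 - (rack_power_kw / NODES) / (blade_power_kw / NODES)
extra_space = rack_units / blade_units - 1

print(f"Racks draw ~{power_saving:.1%} less power for the same compute")       # ~12.5%
print(f"Racks occupy ~{extra_space:.0%} more rack units for the same compute")  # ~59%
```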

So if the capacity requirement for my ‘services’ equated to somewhere between 39 nodes (adding 7 nodes to fill up the rack) and 64 nodes (4 blade enclosures), the power saving of the rack servers would be offset by the blades needing only one rack to perform the task. To throw a rough price on that, in Australia that’d be about $2.5K per month for a single rack. Obviously that comes down with scale, but that’s a very rough saving in the vicinity of $25-30K per annum… Cost that out over a 4-year period (not the same for everyone, but again, to have a baseline) and by opting for blades we’d save approx. $100K per rack.
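Putting that into a throwaway model (the rack price and refresh cycle below are my rough Australian assumptions; substitute your own colo rate):

```python
# Rough rack-space cost offset. The inputs are my approximate figures, not a quote.
rack_cost_per_month_aud = 2500   # ~$2.5K per month for a single colo rack
racks_saved = 1                  # blades fit the same workload in one less rack
refresh_cycle_years = 4

annual_saving = rack_cost_per_month_aud * 12 * racks_saved
lifetime_saving = annual_saving * refresh_cycle_years

print(f"~${annual_saving:,.0f} per year, ~${lifetime_saving:,.0f} "
      f"over {refresh_cycle_years} years")
# Prints ~$30,000 per year / ~$120,000 over 4 years at list; allow for scale
# discounts and you land around the ~$100K-per-rack figure quoted above.
```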

I’m no accountant. That’s for sure! And the numbers were very rough. But they were thrown together to give you something to think about. They might be totally out of line for your implementation but are definitely worth reviewing.

My take on this? At a certain point, blades allow for greater compute density which directly translates into greater service density.

Management

Now this is often a hard one to quantify from a cost perspective, but I cannot stress enough how important a factor this is in the decision. If there is anything my current employer has taught me, it’s that you must put a high value on your time.

Our platform is essentially three generations of technology: two of the rack variety and one of blades. The time we have spent supporting and maintaining each generation was not too dissimilar across the two rack generations, but when it comes to our blade kit, the management effort has been slashed to a fraction of the rest of the environment.

As an example, each of our rack servers has 7 external cables (4x 10GbE, 1x OoB & 2x power) to route. For 16 servers that’s 112 cables. Our blade fabric typically has 16 external cables (8x SFP+ uplinks, 2x OoB & 6x power) in total. As much as I enjoy a long night of intricate rack cabling, I know which I prefer to manage.
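To labour the point, here’s a quick sketch of how the external cable count scales with node count, using my environment’s layout (7 cables per rack server, ~16 per fully cabled enclosure; yours will differ):

```python
# External cable counts as the node count grows, using my environment's layout.
CABLES_PER_RACK_SERVER = 7   # 4x 10GbE, 1x OoB, 2x power
CABLES_PER_ENCLOSURE = 16    # 8x SFP+ uplinks, 2x OoB, 6x power
BLADES_PER_ENCLOSURE = 16

def rack_cables(nodes: int) -> int:
    return nodes * CABLES_PER_RACK_SERVER

def blade_cables(nodes: int) -> int:
    enclosures = -(-nodes // BLADES_PER_ENCLOSURE)  # ceiling division
    return enclosures * CABLES_PER_ENCLOSURE

for nodes in (16, 32, 64):
    print(f"{nodes} nodes: rack = {rack_cables(nodes)} cables, "
          f"blade = {blade_cables(nodes)} cables")
# 16 nodes: 112 vs 16, 32 nodes: 224 vs 32, 64 nodes: 448 vs 64
```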

In our world, we treat a blade enclosure as a fault domain. I know Azure uses a rack for this, but we don’t have that scale. Each server in the fault domain is identical in every way possible, which makes for some very scalable automation.

Yes, the same argument could be made for multiple rack servers, but the level of automation you can get stops short of what you can achieve with a modern blade fabric. Have a look at HPE’s OneView integration with SCVMM; it’s outstanding. Mix that with Service Management Automation (some examples here & here) integrated into your Azure Pack and you have the makings of a truly scalable private cloud – like mine 🙂
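To make the fault-domain idea a little more concrete, here’s a toy sketch of spreading a clustered workload across enclosures treated as fault domains. This is not our production automation (that lives in SCVMM and SMA runbooks); the host names and structure are hypothetical:

```python
# Toy illustration: treat each blade enclosure as a fault domain and place
# clustered VMs so no two replicas share an enclosure. Hypothetical names;
# real placement logic would also weigh host load, maintenance state, etc.
from itertools import cycle

fault_domains = {
    "enclosure-01": ["blade-01", "blade-02", "blade-03"],
    "enclosure-02": ["blade-17", "blade-18", "blade-19"],
}

def place_replicas(vm_names, fault_domains):
    placements = {}
    domains = cycle(fault_domains.items())
    for vm in vm_names:
        domain, hosts = next(domains)
        placements[vm] = (domain, hosts[0])  # naive: first host in the domain
    return placements

print(place_replicas(["sql-node-1", "sql-node-2"], fault_domains))
# {'sql-node-1': ('enclosure-01', 'blade-01'),
#  'sql-node-2': ('enclosure-02', 'blade-17')}
```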

From an engineering perspective, if you’ve ever lived the life of blade fabric management with the likes of HPE, Cisco or Dell, then you will understand the benefits of using such technology and have a good appreciation of the management efficiencies that come with it. At scale, this is extremely important.

From a business perspective, blades reduce management overhead which increases operating efficiency. Operating efficiency improves return on investment.

From my experience, I believe it’s indisputable that blade management significantly wins out over rack mounts…

Vendor lock-in

The subject of vendor lock-in always comes up as an argument against blades, but I honestly do not see it as an issue, and hopefully I can explain my perspective.

As mentioned, we’re primarily an HPE house and use HPE kit pretty much all the way through our core fabric, excluding security and routing appliances. If I take a look through all of our generations, apart from the SAS HBAs in our Storage Spaces cluster, we are using HPE-branded hardware across the board.

Does this mean we miss out on leveraging technology like InfiniBand or RoCE from Mellanox? Not at all. As an example, there is a plethora of HPE-branded Mellanox interface cards to choose from, and the beauty of using ‘vendor locked-in’ hardware is that if we have any issues, we can go straight back to that one vendor. More on networking a bit further down.

Ok, with rack servers we have more choice and could go with non-vendor-aligned interface cards, but do you really need to? That’s another vendor, another set of firmware and drivers, another list of compatibilities, etc. Is it worth the admin overhead to bring more vendors into play? That’s a decision only you can make, but to share our experience: having our environment supported top to bottom by one vendor makes life fairly simple on the rare occasions we have to give them a call.

In an environment where your workloads require access to custom interface cards (e.g. for Skype servers or a serial port), blades might not be suited to that specific use case.

Just something I’ve noticed: when discussing blades vs racks with clients or partners, vendor lock-in typically only comes up as a concern in smaller environments. I completely understand that mentality. But if you’re seriously considering a blade enclosure as your entire virtualization estate, then there are definitely other factors to consider as well.

Storage (SMB Direct)

Now if you’re a Hyper-V shop then you’ve no doubt heard of, and likely gotten excited about, Microsoft’s two iconic flavours of storage – Storage Spaces and Storage Spaces Direct. If so, then you’ve also likely discovered that magical tech spec of performance – RDMA.

It’s often believed that RDMA is not supported in a blade fabric and that you would miss out on this tech if entering the blade world, but this is not quite accurate.

As an example, Mellanox and HPE have been working on joint solutions for a long time, and this extends to blade networking as well. The link provided is for the fairly recent Gen9 series, but this partnership goes back even further, with RDMA via RoCE-capable switching available for Gen8s, so this is nothing new. I suspect there are solutions for earlier generations too (the HPE reference doc for InfiniBand was originally published in 2006, to put it in perspective), but that’s not really relevant if you’re trying to decide which way to go now…

To give a baseline though, all of our production storage is Storage Spaces or Storage Spaces Direct using RDMA, but our blade fabric is not RDMA capable and the performance is still superb for the VM workloads traversing the Ethernet network.

Ideally, I would have liked to have RDMA-capable fabric in the blades, but for other reasons this wasn’t an option at the time. But how much has it hurt us? Let’s look at our System Center SQL cluster for a quick snapshot of the performance.

This SQL cluster node is home to a range of instances and databases, one being our primary multitenant Operations Manager (SCOM) DB with over 600 agents connected. As you can see below, the response time (yes, just one of many performance metrics to review) of the OperationsManager DB is 7 ms, well within the acceptable range. Note that I took this snip at about 11:45 am on a weekday, which is peak operating time, and the average response time was hovering around the 4 ms mark…

[Screenshot: SQL instance performance metrics, showing the OperationsManager DB response time at 7 ms]

Now, all the sexiness around SMB storage aside, there is nothing stopping you from utilizing legacy SAN options using iSCSI or Fibre Channel. In fact, each blade vendor typically has some excellent SAN integration options if that is more suited to your needs. Most of these options work well with SCVMM, but I recommend you do your homework to ensure that your vendor supports SMI-S. Many vendors have their own proprietary management software that they want you to pay for, so I find support for industry standards is often a little behind their own…

I obviously left the legacy world of expensive proprietary SANs long ago, but I do recognise that they still have their place in our society.

Note: the Cisco UCS VICs (1340 and 1380) are not supported by SDDC Premium, so if you want Microsoft SDNv2, Cisco blades might not be the best choice for you. Dell has networking options that include Mellanox, but I have not confirmed their support for ConnectX-3 Pro/SDDC Premium. I’d welcome comments from the infrastructure community; someone out there may have valuable experience to share on this point.

Hyper-Converged vs Disaggregated

Just a quick shout out to Hyper-Converged Infrastructure (HCI). I will not go into HCI vs disaggregated here, as that is a whole other discussion, but it’s worth mentioning that HCI architecture doesn’t really align with blades due to the obvious physical limitations in disk capacity.

Typically, each blade is limited to 2 hard disks for the OS, so if HCI is your flavour, then blades likely aren’t a consideration.
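For a rough sense of why, compare the raw local capacity per node. The drive counts and sizes below are illustrative assumptions rather than specific SKUs:

```python
# Rough raw-capacity-per-node comparison for an HCI design.
# Illustrative assumptions, not specific SKUs.
drive_tb = 1.92     # assumed SSD capacity per drive

blade_bays = 2      # typical half-height blade: two internal SFF bays
rack_2u_bays = 24   # a common 2U rack server SFF bay count

print(f"Blade node raw local capacity:   {blade_bays * drive_tb:.1f} TB")
print(f"2U rack node raw local capacity: {rack_2u_bays * drive_tb:.1f} TB")
# ~3.8 TB vs ~46 TB per node, before Storage Spaces Direct resiliency overheads.
```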

Networking (and RoCE)

RoCE (RDMA over Converged Ethernet) works with blades. Same options as a rack mount: spec the appropriate Mellanox NICs and compatible switches. Done 🙂

… not quite.

Over the years, there have been questions as to whether a blade vendor’s networking fabric is supported by Microsoft. That, to me, is a very simple question to answer: is the selected hardware on the Microsoft Windows Server Catalog (WSC)?

Let’s take HPE again as an example (sorry Dell, Cisco, etc). If you were in the process right now of choosing your blade fabric, the BL460c Gen9s have a range of choices when it comes to the network fabric, one of those being the aforementioned HPE and Mellanox joint solution.

Taking the NIC mentioned in the above article, the HPE InfiniBand 544+M, we head on over to the WSC and validate whether it is actually supported by, firstly, Windows Server 2016 and, if heading down the Microsoft SDNv2 path, SDDC Premium. More on the Microsoft SDDC program here, but for reference, the SDDC/HCI Standard and Premium feature list is below.

[Image: SDDC/HCI Standard and Premium feature list]

That all said, most of this is irrelevant if you do not want SDN or shielded virtual machines, as the default LOMs (for HPE at least) are supported by SDDC/HCI Standard. So again, networking is somewhat of a moot point.

Another pro argument for the racks is more choice in NICs, e.g. Chelsio etc. I absolutely agree there is a wider range of choice, but in a Hyper-V environment, why would you? Azure uses Mellanox, Azure Stack uses Mellanox, so in the world of Hyper-V, the debate ends there…

Edit: HPE Synergy systems are supported by Windows Server 2016 SDDC and also satisfy the management efficiency of a blade fabric. The HPE Synergy solution does bring some new composable infrastructure features, but from a compute perspective they are essentially blades v2. And although they don’t have as many of the core fabric options as the c7000 with BL460s, they still offer compute, network and storage, making them a great fit for a Hyper-V 2016 deployment with Microsoft SDN.

Virtual GPU

Another common discussion these days is that of GPUs being made available to virtual machines for heavy graphics workloads.

Part of the feature set of RemoteFX is a virtual GPU presented to the VM. This is greatly enhanced when the underlying host has access to a dedicated physical DirectX 11-capable GPU. Much like the HCI discussion, the limited real estate in each blade means the options for adding GPUs are limited.

Taking a look at HPE blade options, there is a range of choices for GPU-focused workloads. We can choose between mezzanine GPUs or the dedicated WS460c series blades that offer expansion bays allowing you to add dedicated GPUs.

If heavy graphics workloads are a must for your environment and the array of options available from your blade vendor does not suit you, then maybe this could be the differentiator that sways you over to rack mounts.

Don’t forget that the racks vs blades challenge is not a fork-in-the-road decision. Far from it. As an example, there is nothing stopping you running blades for your core server fleet and then having rack mounts with juicy GPUs for your VDI solution.

So, which one wins?

After all the above, what is the best solution, racks or blades? Well… “it depends”.

What a letdown. You thought I was going to save you all the hard work and make that all-important decision for you? Well, I wish I could, and I am more than happy to guide you towards the right decision. But the reality is that which solution is best for you depends on a long list of factors, some of which I touched on above. Somewhere in there is the right balance, but that needs to be worked out to fit your service catalogue.

To give you some guidance though, the whole reason I have been motivated to revisit this debate is that we are currently going through the process of defining the next generation of our environment, and thus having the exact “blades or racks” debate right now.

At the moment, I am in favour of blades for the majority of our environment, as that suits us. But only time will tell whether we head down the HCI path or maintain the disaggregated model for our primary production platform. We’re also in the process of acquiring an Azure Stack scale unit, so in a way we’re already on the HCI path.

Wrap up & feedback

The racks vs blades discussion warrants more than I could fit into a single blog post, but I’ve tried to at least provoke reflection on some of the key considerations. I may have missed something, or you may have a different perspective, so please feel free to comment or provide feedback. After all, the intent here is for us all to have as much information as we can to make informed decisions.

If my experience, insights and rants help at least one person make a more informed decision, then I reckon that’s worth the effort.

Happy SDDC’ing!
Dan
