Hyper-Converged Infrastructure - Hype or the Future of the Datacenter?

In the last few years, the term hyper-converged infrastructure, along with Hyper-Converged Infrastructure Appliances (HCIA), has started to appear on the market.

So what is hyper-convergence, and why should you care?

First, let's define hyper-converged. It's really two pieces: the hardware and the software.

First, the hardware. Most hyper-converged solutions on the market today are based on traditional 1U and 2U rack servers, populated with either a combination of SSDs and HDDs or all-flash.

Some vendors increase density with the newest 4-in-2U sled designs, which pack 4 compute sleds and 24 x 2.5" drives into a single chassis, or drive sleds that can support 16 drives each. While some models support 1Gb NICs, most of the higher-end models support 2 x 10GbE per compute node.

Next is the software - which is the secret sauce. The design came from the web-scale companies out there like Google, Facebook, etc. The software, which typically comes in the form of a VM or a hypervisor add-on, performs two key functions.

First, it abstracts the storage from the hypervisor (VMware, Hyper-V, KVM). Volumes are presented to the hypervisor via NFS, and the controller VM (or client VM, as it is sometimes called) then acts as a proxy for all disk IO.

Second, it provides high availability and recoverability via replication of data across the pool of storage. So, for example, as an app writes data, the controller VM writes it to the primary volume as well as to a secondary store on another node. Depending on the vendor you choose, some also allow a third or additional replicas to be stored at a remote site.
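To make that write path concrete, here is a minimal sketch in Python. The node names, the send_write() transport, and the hash-based replica placement are all my assumptions for illustration - each vendor implements this layer differently:

```python
# Minimal sketch of the controller VM write path described above.
# Node names, the send_write() stand-in, and the replica-placement
# rule are hypothetical; real products vary widely here.
import hashlib

NODES = ["node-a", "node-b", "node-c"]  # the initial 3-node cluster

def pick_replica_nodes(block_id: str, copies: int = 2) -> list[str]:
    """Deterministically spread copies of a block across distinct nodes."""
    start = int(hashlib.md5(block_id.encode()).hexdigest(), 16) % len(NODES)
    return [NODES[(start + i) % len(NODES)] for i in range(copies)]

def send_write(node: str, block_id: str, data: bytes) -> bool:
    """Stand-in for the network call a real controller VM would make."""
    print(f"wrote {len(data)} bytes of {block_id} to {node}")
    return True

def proxied_write(block_id: str, data: bytes) -> bool:
    """Ack the guest only after the primary AND the replica copy land."""
    primary, secondary = pick_replica_nodes(block_id)
    return send_write(primary, block_id, data) and send_write(secondary, block_id, data)

proxied_write("vm42-block-0001", b"app data")
```

The key design point is in proxied_write(): the guest's write is not acknowledged until both copies exist, which is what lets the cluster lose a whole node without losing data.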

Hyper-converged infrastructures are built using a building-block approach. Each block, or "node," provides compute and storage to support virtual workloads. You always start with 3 nodes. In addition, you have to factor in that whatever raw storage capacity a node has - let's say 40TB - only 20TB of it will be usable, since the other 20TB will be keeping replica data from the other nodes.
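Here is that capacity math as a quick sketch, assuming a straight replication factor of 2 and ignoring the metadata, spare, and rebuild reservations real products set aside:

```python
def usable_capacity(nodes: int, raw_tb_per_node: float, replicas: int = 2) -> float:
    """Every block is stored `replicas` times, so usable = raw / replicas.
    Ignores metadata, spare, and rebuild overhead real products need."""
    return nodes * raw_tb_per_node / replicas

# The example from the text: 3 nodes x 40TB raw, with 2 copies of all data.
print(usable_capacity(3, 40))  # 60.0 TB usable out of 120 TB raw
```

The same function shows what scale-out buys you: adding a fourth 40TB node takes the pool from 60TB to 80TB usable.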

After the initial 3 nodes, you can add one or more as needed until you reach the operational limit of the solution. With some vendors, that limit is marketed as infinite.

Okay - so now that you understand the basics of hyper-converged, what's the big deal?

As I stated earlier, this technology came out of the web-scale companies like Google & Facebook. At their scale, they experienced a couple of challenges.

First was scale. Traditional SAN infrastructure just couldn't scale to the level they needed. There is a concept called the IO blender: when you try to scale a traditional SAN and hundreds of hosts connect to the same array, the controllers become the bottleneck. Hyper-converged solutions place the data very close to compute and therefore typically deliver better performance. Additionally, adding a node adds compute and storage at a consistent price.

Second was cost. Anyone who has implemented traditional SAN infrastructures understands the costs: HBAs, edge and core switches, software, and the arrays themselves. Hyper-converged solutions use commodity devices - rack servers, SSDs and HDDs. In fact, there is no RAID for data protection and therefore no RAID controllers; you typically use SAS/SATA pass-through controllers and JBOD. The software handles all the high availability and data protection.

And most importantly, there is no SAN. No HBAs, no CNAs, no SAN switches and no arrays.

Modern SSDs, especially NVMe PCIe SSDs, can provide hundreds of thousands of IOPS per node. So suddenly you're scaling into the millions of IOPS at a price point that no array-based solution can touch.
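As a back-of-the-envelope sketch (the per-node figure is an assumed round number, not a benchmark; real results depend on drive count, block size, and read/write mix):

```python
# Back-of-the-envelope aggregate IOPS for a scale-out cluster.
# 200K IOPS/node is an assumed round number, not a measured result,
# and replication roughly doubles the backend cost of each write.
iops_per_node = 200_000
for nodes in (3, 8, 16):
    print(f"{nodes:2d} nodes -> ~{nodes * iops_per_node / 1e6:.1f}M IOPS")
```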

The end result is extremely high performance and availability at a predictable price and operations model. That is why you need to care about hyper-converged - it is a game changer in the datacenter.

In fact, I will make a prediction here: within 5 years, SANs as we know them will probably fade to black, especially once the new SFP28/QSFP28-based NICs that support 25/50/100Gbps become commonplace. In addition, all-flash versions of these solutions will become commonplace.

Today, there are several players in this space, including VMware with vSAN Ready Nodes, Nutanix with their NX and Dell XC appliances, as well as software plays like ScaleIO from EMC. Each of these solutions follows a similar architecture but varies in how it is marketed, procured, licensed and supported.

If you combine hyper-converged compute/storage with SDN and spine/leaf networking, it is entirely possible to build an SDDC-ready infrastructure to support DevOps.

Bottom line: this technology is very compelling, very disruptive, and it works. Do some due diligence and stick your toe in the water. Learn how it works and what works best for your use cases, and you can really simplify your infrastructure.