The Modern Server - back to the future

For nearly 25 years now, the Intel-based x86 server platform has been the foundation of modern distributed computing.

From the early Compaq SystemPro in 1989 and the ProLiant in 1993, the concept of combining a single, dual or quad processor motherboard with a drive bay of pluggable hard drives, RAID and dual power supplies became the base platform for the growth of operating systems like Windows Server and Linux.

By the late '90s, as the number of these servers exploded and storage requirements started to grow beyond the capacity of the drive bays, there was a movement to consolidate storage onto large-scale arrays and create a storage area network (SAN) to give servers access to those arrays.

Throughout the last decade the SAN grew in popularity and customers spent millions on the arrays and the associated networking gear.  The challenge is that SANs are expensive - easily double or triple the cost of the same capacity you would have gotten with direct attached storage (DAS).

SANs can also be very complex to operate - zoning rules, multipathing, replication and so on - to the point where many corporate IT shops created a whole group dedicated to storage management.  It even got to the point where companies were promoting boot-from-SAN so they could eliminate local storage from compute entirely.

But in recent years two things have changed.  First is the growth of capacity in the 3.5" drive format.  Today you can buy 6TB in a single drive - which was unheard of just a few years ago - and 8TB and even 12TB drives running at 16Gbps throughput are on the horizon.

That means even a simple 12-drive 6TB config in a 2U server gives you 72TB of raw storage.  I'm old enough to remember when a 72GB drive was considered big - so you can now do with 12 drives what would have taken 1,000 drives.
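
Here's the quick back-of-the-envelope math in Python, using nothing but the round numbers above:

```python
# Back-of-the-envelope raw capacity math, using the round numbers above.
drives = 12
drive_tb = 6
raw_tb = drives * drive_tb
print(f"Raw capacity: {raw_tb} TB")                            # 72 TB

# How many of the old 72GB drives would it take to match that?
old_drive_gb = 72
equivalent_old_drives = (raw_tb * 1000) / old_drive_gb
print(f"Equivalent 72GB drives: {equivalent_old_drives:.0f}")  # 1000
```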

Second is the solid state disk, or SSD.   These drives, whether SLC, MLC or TLC, have absolutely revolutionized performance.  System architects use IOPS, or I/O operations per second, as a guideline for application performance.   A traditional 7200 RPM SATA or SAS drive typically delivers 140 - 160 IOPS, so to get higher performance you had to stripe your data across multiple drives - and even then you often maxed out at 1,400 - 1,800 IOPS per stripe.

In comparison, a single SSD can deliver 20,000 - 90,000 IOPS.  You cannot create a traditional SATA/SAS HD stripe big enough to come anywhere close to that performance.
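
A rough comparison makes the gap obvious.  This is just a sketch using the per-drive figures quoted above - real results vary widely with workload, block size and queue depth:

```python
# Rough IOPS comparison using the per-drive figures quoted above.
hdd_iops = 150          # typical 7200 RPM SATA/SAS drive
stripe_width = 10       # drives in the stripe

hdd_stripe_iops = hdd_iops * stripe_width
ssd_iops = 50_000       # middle of the 20,000 - 90,000 range above

print(f"10-drive HDD stripe: ~{hdd_stripe_iops:,} IOPS")            # ~1,500
print(f"Single SSD:          ~{ssd_iops:,} IOPS")
print(f"SSD advantage:       ~{ssd_iops / hdd_stripe_iops:.0f}x")   # ~33x
```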

The challenge is that SSDs today do not have huge capacity and are expensive compared to traditional HDs.   Additionally, for a long time there was no good-quality software that would leverage a combination of SSD and HD on an individual server.

So the first place you started to see that hybrid SSD/HD combination was on the large arrays, where the concept of data tiering was introduced.  Data tiering lets the array examine your data and place it on either SSD or HD based on how often it is accessed - the goal being to keep data on the highest-performance tier and move it down to cheaper, slower HD as it ages.   The concept was even extended to the write side, whereby all data that is written goes to SSD first and then, as it ages, gets moved to the lower tier.
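
Conceptually the tiering decision is simple: watch how often and how recently a chunk of data is touched, keep the hot stuff on flash, and let the rest age down to spinning disk.  Here's a minimal sketch of the idea in Python - the extent structure and thresholds are hypothetical, not any particular vendor's algorithm:

```python
import time
from dataclasses import dataclass

@dataclass
class Extent:
    """A chunk of data the array tracks for placement decisions (hypothetical)."""
    access_count: int    # accesses seen in the current sampling window
    last_access: float   # epoch seconds of the most recent access

# Hypothetical thresholds - real arrays tune these per workload.
HOT_ACCESS_COUNT = 100
COLD_AGE_SECONDS = 7 * 24 * 3600   # untouched for a week

def choose_tier(extent: Extent, now=None) -> str:
    """Keep hot, recently used extents on SSD; everything else lives on HD."""
    now = now if now is not None else time.time()
    recently_used = (now - extent.last_access) < COLD_AGE_SECONDS
    if extent.access_count >= HOT_ACCESS_COUNT and recently_used:
        return "ssd"
    return "hd"
```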

There has also been an explosion in "all flash" arrays that provide massive IOPS levels to specific apps.   But in most cases I've found it difficult to find traditional business apps that can use anywhere near the level of IOPS these devices provide.

But now all of those concepts are returning to the server itself.   While SSDs are not necessarily new to servers, the format, number, speed and software are.
When SSDs first started to become popular in servers, they were typically used first for booting the OS and second as a read cache to increase performance.   Traditionally you could put maybe 2 or sometimes 4 SSDs in a server.
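
The read-cache role is easy to picture: serve recently read blocks from the SSD and only go to the spinning disk on a miss.  Here's a minimal sketch of that idea as an LRU cache in Python - the read_from_hd callback is a hypothetical stand-in for the slow path, not any vendor's caching software:

```python
from collections import OrderedDict

class SsdReadCache:
    """Toy LRU read cache: hot blocks are answered from SSD, misses fall
    through to the slower spinning disk and get cached for next time."""

    def __init__(self, capacity_blocks, read_from_hd):
        self.capacity = capacity_blocks
        self.read_from_hd = read_from_hd   # hypothetical slow-path reader
        self.blocks = OrderedDict()        # block_id -> data, in LRU order

    def read(self, block_id):
        if block_id in self.blocks:          # hit: SSD-speed read
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id]
        data = self.read_from_hd(block_id)   # miss: pay the HD latency once
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict the least recently used
        return data

# Usage sketch: cache = SsdReadCache(capacity_blocks=1024, read_from_hd=my_hd_reader)
```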

Today, however, you can buy servers that support the new 1.8" SATA SSDs as well as high-density 3.5" 6TB drives, along with software that supports both read and write caching to explode performance.   My company, Dell, for example sells a model with 18 x 1.8" 960GB SATA SSDs + 8 x 3.5" 6TB SAS HDs - for a total capacity of 68.5TB and nearly 400,000 IOPS.   These kinds of capabilities in a single server were not available 3 years ago.   Not even close.

Combine that with the new Haswell-based motherboards capable of supporting dual 18-core processors, 1.5TB of DDR4-2133 RAM and multiple 10Gbps (and eventually 40Gbps) Ethernet ports, and you now have single servers capable of A) hosting hundreds of VMs, B) running entire production SAP or Oracle application environments, or C) handling huge Exchange mailbox server roles.

Just a few years ago, supporting these kinds of apps often involved server "farms" of anywhere from 6 - 10 servers along with a SAN.

Certainly, with this kind of storage density and performance on a single server you still have to address operational aspects like backup, high availability, and disaster recovery - but I would argue that you may actually have more options with a server model than with a SAN model...

Bottom line: if your server farm is 3+ years old and connected to a traditional SAN at 4 or even 8Gbps, take a serious look at the latest generation of servers and seriously consider moving to these hybrid SSD/HD configurations to reduce your reliance on the SAN.  You'll see much better performance per dollar than you would with SAN-based options.





