Solid State Storage - Where we are heading

Solid State Disk (SSD) has actually been around for many years.  Some of the first 20MB (yes, MB) drives started showing up as early as the mid-to-late 80's.  In those days the application in many cases was focused on early supercomputer systems from Cray, Amdahl, etc.

The challenge back then was both price and longevity.  Based on RAM technologies, these drives had to maintain a powered state and really couldn't support one of the key tenets of disk storage - data integrity over an extended period - the way you could with magnetic disk, "the hard drive".

In the 90's, NAND (Not-AND) Flash memory technology, which had been developed by Toshiba in the mid-80's, started to see its first applications in what we commonly call today "thumb" drives and SD cards.  Designed for camera and PDA applications, NAND showed real promise as a storage medium that was inexpensive and could support long-term storage.

With early NAND Flash the challenge was scale, the ability to integrate it into traditional storage protocols like SCSI or SATA, and the ability to sustain intensive read/write operations like the ones used in PCs, servers and disk arrays.

But that has all changed.  Today, there are three main classes of NAND Flash based SSD available for PCs, Servers and Arrays...

SLC - Single Level Cell - which is very fast and designed for "write intensive" applications

MLC - Multi Level Cell - which provides more capacity and can support what is called "mixed use" applications - so read/write

TLC - Triple Level Cell - which dramatically increases capacity and lowers cost.  TLC drives are classified today as mostly for "read intensive" applications.

The big advantage of SSD over HDD is IOPS, or I/O operations per second.  For example, your typical 7200 RPM HDD can support around 80 IOPS.  Moving up the performance curve, a 10K RPM drive can support around 120 IOPS, and the fastest spinning media today, the 15K drive, can support maybe 160.

For IT folks, one way to get additional performance for an application was to spread the load across multiple drives, or spindles, in order to meet a total IOPS spec.  But even to get, say, 10K IOPS from 15K drives, you would need around 63 of them using the 160 IOPS per drive figure I mentioned above.
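To make that spindle-count math concrete, here is a minimal Python sketch using the rough per-drive IOPS figures above; these numbers are rules of thumb, not vendor specs.

```python
import math

# Rough per-drive random IOPS figures quoted above (rules of thumb, not specs)
hdd_iops = {"7200 RPM": 80, "10K RPM": 120, "15K RPM": 160}

target_iops = 10_000  # example application requirement

for drive, iops in hdd_iops.items():
    spindles = math.ceil(target_iops / iops)
    print(f"{drive}: {spindles} drives to reach {target_iops:,} IOPS")
```

Run it and the 15K tier comes out to 63 drives just to hit 10K IOPS - before you even think about capacity or RAID overhead.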

SSD IOPS performance is measured in the thousands.  Many drives have specifications in the 20K - 80K IOPS range or more.  If you move to NVMe, or Non-Volatile Memory Express, technology that uses the PCIe bus instead of SCSI or SAS, that IOPS number can be as high as 750K for a single drive.  This is a performance level that simply can't be duplicated with traditional hard drives.

So now let's dispel a few myths...

SSDs are way too expensive.  Well, yes and no - it depends on the application.

For example, a 600GB 15K RPM 12G SAS drive today sells for around $700, or about $1.17/GB.  The equivalent SLC - an 800GB SAS drive - is about $4,700, or $5.88/GB, a 400+% premium.  The MLC version of the same 800GB drive is $3,700, or $4.62/GB - still a pretty good premium.

The TLC, however, is much less expensive.  A 960GB TLC SATA drive is just $1,100, or $1.15/GB - effectively the same cost as the 15K.  And the TLC provides 80K 4K random read/write IOPS, roughly 500x the performance, with extremely low latencies compared to HDD.
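Here is a quick sketch of that $/GB math, using the approximate street prices quoted above (prices move constantly, so treat these as illustrative):

```python
# Quick $/GB comparison using the approximate prices quoted above
drives = {
    "600GB 15K SAS HDD":  (700, 600),
    "800GB SLC SAS SSD":  (4700, 800),
    "800GB MLC SAS SSD":  (3700, 800),
    "960GB TLC SATA SSD": (1100, 960),
}

for name, (price_usd, capacity_gb) in drives.items():
    print(f"{name}: ${price_usd / capacity_gb:.2f}/GB")
```

The point is that on a cost-per-GB basis the TLC drive and the 15K HDD are now essentially interchangeable - the performance is not.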

A lot of vendors today position SSD primarily as a caching layer - read-only or read-write - with lower cost spinning disk as the layer for "cold" data.  I argue that approach really doesn't hold water anymore.  "But," some say, "I can't get the capacity with SSD."  That's no longer true either.  TLC, for example, is available in a 3.84TB size - the same capacity many storage architects prefer for cold HDD storage, the 4TB 7200 RPM drive.  Most avoid the 6TB and 8TB models because the RAID rebuild times take forever.

By the end of 2016 I fully expect to see TLC drive capacities nearing 18TB in a 2.5" format, with pricing falling under that magical $1/GB mark.  And even though you can buy 8TB HDDs today, I think HDD may never reach 18TB.

Another myth about modern SSD is longevity.  In the early days these drives did tend to "wear out" because of cell failures after what seemed to be a limited number of write/read operations.  Today that is not the case at all - in fact, most SSDs have a higher Mean Time Between Failures (MTBF) than their hard drive counterparts.  Most SSDs today can be purchased with full 3 year warranties and in most applications will last well beyond that.

On the opposite end of the spectrum is SSD for boot drives.  Today you can easily buy a 120GB SATA MLC boot drive for as little as $140.  Why would you continue to buy 300GB 10K or 15K drives at $200 or $370 that can't come anywhere near SSD performance levels for the OS?

Finally let's factor in modern Software Defined Storage and the return to Local Drives versus SAN storage.  

Let's face it - traditional storage arrays and SANs are very expensive.  When you factor in HBAs, multi-path software, SAN edge and core switches, disk arrays, maintenance, etc., all of the advantage that could potentially be gained from a lower cost per GB of storage is lost.  Add to that a modern factor called the IO Blender effect - dozens if not hundreds of hosts trying to read/write to the same disk array - and you can see why there is a movement to get storage closer to the compute and a return to local drives.

15 years ago, when modern arrays and SANs began to appear, they solved a problem that early server-based storage could not address: scale.  It even evolved to the point of "boot from SAN", with no local storage on the host at all.  But to get the performance you needed, you had to add tray upon tray of disk - either 12 or 24 drives at a time.  I know of many installations that have hundreds of 15K HDDs as the "performance" tier, where every IO has to traverse the storage network, be processed by the array's storage controllers, and eventually be written to disk.

With local SSD that approach is now broken.  Say I have a host computer that runs 25 VMs, each VM needs 10K IOPS - 250K IOPS total - plus maybe 90 - 100TB of storage.  I can do it all in one 2U server with 2 x 120GB boot drives and 24 x 3.84TB TLC, and completely blow away anything that could be done with an array - at probably 60% savings.  You can also start to think about getting rid of traditional controller-based RAID: look into SDS technologies to provide your data protection.  Yes, there is overhead, but often the level of protection is higher and better performing than RAID.
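A quick back-of-the-envelope check of that example, assuming the 80K IOPS TLC figure quoted earlier - a sketch only, since SDS overhead, data protection and workload mix will eat into both numbers:

```python
# Back-of-the-envelope sizing for the 2U server example above.
# The 80K IOPS per-TLC-drive figure is the one quoted earlier in this post;
# real results depend on the controller, SDS overhead and workload mix.
vm_count = 25
iops_per_vm = 10_000
tlc_drives = 24
tlc_capacity_tb = 3.84
iops_per_tlc = 80_000

required_iops = vm_count * iops_per_vm            # 250,000
raw_capacity_tb = tlc_drives * tlc_capacity_tb    # ~92 TB raw
aggregate_iops = tlc_drives * iops_per_tlc        # 1,920,000 at the drive level

print(f"Required IOPS:     {required_iops:,}")
print(f"Raw capacity:      {raw_capacity_tb:.1f} TB")
print(f"Drive-level IOPS:  {aggregate_iops:,}")
```

Even with generous overhead for the SDS layer, the drive-level headroom dwarfs the 250K IOPS requirement - something a 15K-based array simply cannot offer in 2U.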

Even if you have an existing array investment - don't continue to purchase 10K and 15K hard drives - purchase TLC instead.  I recently showed a customer how to replace 22 x 2U trays of 15K with 2 x 2U trays of TLC.  The savings in rack space, power and cooling alone pay for it.
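To show where those savings come from, here is an illustrative sketch; the 24-drive tray size and the per-drive wattages are my assumptions for the example, not figures from the customer engagement or any vendor datasheet.

```python
# Illustrative comparison of 22 trays of 15K HDD vs 2 trays of TLC SSD.
# Assumes 24-drive 2U trays; per-drive wattages are rough assumptions,
# not measured or vendor-published figures.
drives_per_tray = 24
watts_per_15k_hdd = 10   # assumed average draw per 15K HDD
watts_per_tlc_ssd = 5    # assumed average draw per TLC SSD

rack_units_freed = (22 - 2) * 2
power_15k_w = 22 * drives_per_tray * watts_per_15k_hdd
power_tlc_w = 2 * drives_per_tray * watts_per_tlc_ssd

print(f"Rack units freed: {rack_units_freed}U")
print(f"Approx. drive power: {power_15k_w} W (15K HDD) vs {power_tlc_w} W (TLC)")
```

Forty rack units and a large cut in drive power and cooling, before you even count the performance difference.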

Bottom line is this - SSD is here to stay.  These drives are safe, extremely fast, reliable, and now affordable.  Don't get caught up in "they're too expensive" or "they aren't as reliable" - not anymore.  Soon they will blow past HDD in terms of both capacity and price.
