Data Center Networking Trends - 100Gbps & Open Networking

It's been a while since I've written about the state of data center networking, so here are some new trends, products and approaches I'm seeing.

First up are the topology changes.  New data center networks are based on something called the Spine / Leaf architecture.  The spine is typically made up of a group of very high speed switches that connect to a number of leaf switches at top-of-rack (TOR).  This connectivity creates a 2-tier fabric where each leaf switch connects to each spine node.
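To make the wiring math concrete, here's a quick sketch (the switch counts are made-up example numbers, just for illustration) of how that full mesh between the two tiers adds up:

```python
# In a 2-tier Spine/Leaf fabric, every leaf connects to every spine,
# so the fabric link count grows as leaves x spines.
spines = 4    # hypothetical spine switch count
leaves = 16   # hypothetical leaf (TOR) switch count

fabric_links = spines * leaves   # 64 cables in this example
uplinks_per_leaf = spines        # one uplink from each leaf to each spine

print(f"{fabric_links} fabric links, {uplinks_per_leaf} uplinks per leaf")
```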


In addition to the fabric itself, several legacy Layer 2 protocols like Spanning Tree (STP) are being replaced with Transparent Interconnection of Lots of Links (TRILL) or Shortest Path Bridging (SPB).

Many data centers are moving to this model as they upgrade gear or begin the transition to Software Defined Networking (SDN) by adding overlay network capabilities like VXLAN or NVGRE.
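To give you a taste of what an overlay looks like on a Linux-based stack, here's a minimal sketch that stands up a VXLAN tunnel endpoint using iproute2 (the interface name, VNI and multicast group are made-up example values; this needs root and assumes an eth0 underlay):

```python
import subprocess

def run(cmd):
    """Run an iproute2 command, failing loudly on error."""
    subprocess.run(cmd.split(), check=True)

# Create vxlan100 with VNI 100; BUM traffic floods to a multicast
# group over the eth0 underlay, using the standard VXLAN port 4789.
run("ip link add vxlan100 type vxlan id 100 dev eth0 dstport 4789 group 239.1.1.1")
run("ip link set vxlan100 up")
```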

One design consideration here is ensuring that your leaf switches have enough uplink ports to allow connectivity to each spine switch.  Luckily, more and more OEMs are increasing the number of uplink ports on their leaf offerings to accommodate the Spine / Leaf design.
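A quick way to sanity-check that part of a design is the oversubscription ratio: total server-facing bandwidth on a leaf divided by its total uplink bandwidth. A sketch with made-up port counts:

```python
# Hypothetical leaf: 48 x 10Gbps server ports and 4 x 40Gbps uplinks
# (one uplink per spine switch).
downlink_gbps = 48 * 10
uplink_gbps = 4 * 40

ratio = downlink_gbps / uplink_gbps
print(f"Oversubscription: {ratio:.1f}:1")   # 3.0:1 in this example
```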

The next trend is speed.  Today many networks use 10Gbps SFP+ connections from server to TOR switch, then 40Gbps QSFP+ from leaf to spine.

The industry is now switching to a new Small Form-Factor Pluggable (SFP) standard called SFP28.   The big change here is the ability to support 25Gbps over an SFP28 connection or 100Gbps over a QSFP28 connection.

The mechanical part of this connection is identical to SFP+, so in theory a switch could support 10/25/40/50/100 Gbps connections.  In addition, new NICs are coming to market that support the new SFP28 connections.
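The arithmetic behind all of those speeds is lane-based: the SFP form factors carry one lane, the QSFP form factors carry four, and SFP28/QSFP28 bump the lane rate from 10Gbps to 25Gbps. A quick illustration:

```python
# Form factor -> (lanes, Gbps per lane). 50Gbps is typically 2 x 25G lanes.
form_factors = {
    "SFP+":   (1, 10),   # 10Gbps
    "SFP28":  (1, 25),   # 25Gbps
    "QSFP+":  (4, 10),   # 40Gbps
    "QSFP28": (4, 25),   # 100Gbps
}

for name, (lanes, rate) in form_factors.items():
    print(f"{name}: {lanes} x {rate}G = {lanes * rate}Gbps")
```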

Now you might think that making this jump to 25Gbps server connections and 100Gbps spine connections would be outrageously expensive.   Well, not really, and this brings me to the next trend: Open...

When I discuss open, I'm talking about several things.  First is the hardware and the concept of "merchant silicon".   In the past, switch vendors achieved a lot of performance and feature gains by developing custom Application Specific Integrated Circuits (ASICs).  A custom ASIC allows a vendor to tune the silicon to their specific needs.

The one challenge, however, is that custom ASICs can take a long time to design, test and build, and their smaller volume runs make them expensive.  That cost is passed on to the customer.  For many years those custom ASICs provided the value proposition of one vendor over another.  As the industry has matured, many chipmakers have developed "standard" ASICs that can be used by many OEMs, much like Intel does with their CPUs.

The impact is that this is driving down the cost of networking: many of these ASICs do a very good job at a lower price because they are sold in much larger volumes.

The next piece of the Open discussion is software, or the "OS" that runs the switch.  Much like the ASIC conversation, OEMs spent years developing their own proprietary OSes whose features and performance provided the value to customers.

That has also changed.  Many of the common core networking protocols at Layer 2 & 3 have been implemented in open source projects like switchd, lldpd and Quagga that are then bundled together with a Linux distribution.  One such vendor in this space is Cumulus Networks with its Cumulus Linux.  Other folks in the space are Big Switch, IP Infusion, Midokura (MidoNet) and Nuage Networks.
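To give a flavor of what these stacks look like to operate, here's a sketch that pushes a basic BGP config through Quagga's vtysh shell (the ASN, neighbor address and prefix are made-up values, and it assumes Quagga's bgpd is already running):

```python
import subprocess

# Hypothetical BGP session config, applied via Quagga's vtysh CLI.
commands = [
    "configure terminal",
    "router bgp 65000",
    "neighbor 10.0.0.1 remote-as 65001",
    "network 192.168.10.0/24",
]

# vtysh executes each -c command in sequence within one session.
args = ["vtysh"]
for c in commands:
    args += ["-c", c]
subprocess.run(args, check=True)
```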

So think about it this way.  Today most OEM switch OSes are based on Linux at their core, with various apps running on top that provide the features and functions you want.   The difference is that they are developed totally in-house and often bound to custom ASICs to provide that functionality.  This model certainly works but is often expensive and sometimes complex to manage.

But what if I treated a switch much like I do a server?  Meaning I buy a piece of hardware that meets my use case and price/performance requirements, then I load any OS I want.  In fact, I can change the personality of the switch by simply loading a new OS or app.

To support that model, the last piece of the open discussion is an emerging standard called ONIE, the Open Network Install Environment.  Think of it like the Preboot Execution Environment (PXE) on a server.  It allows you to boot an ONIE-supported switch into "install" mode, where it can pull whatever OS stack you want via TFTP or HTTP and then boot back into switch mode...
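As one example of staging that, ONIE's discovery looks for well-known installer filenames (onie-installer is one of them) on a server that DHCP points it at. Here's a minimal sketch that serves installers with nothing but the Python standard library (the port and filename conventions are assumptions to verify against your ONIE build):

```python
# Serve the current directory, which holds an OS image saved under an
# ONIE discovery name such as "onie-installer". Assumes your DHCP
# server directs booting switches to this host and port.
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

server = ThreadingHTTPServer(("0.0.0.0", 8080), SimpleHTTPRequestHandler)
print("Serving ONIE installers on port 8080...")
server.serve_forever()
```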

And since these open networking stacks are based on Linux, I can also bring in tools like Puppet, Chef or Ansible and get the same levels of automation and provisioning I have with my servers.
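As a down-to-earth illustration, here's a sketch that treats a switch exactly like a server: copy a config file over SSH and reload networking (the hostname, user and file names are made-up; ifreload -a is how ifupdown2-based systems like Cumulus Linux apply /etc/network/interfaces changes):

```python
import subprocess

SWITCH = "leaf01.example.com"   # hypothetical switch hostname

def push_config(local_file, remote_path="/etc/network/interfaces"):
    """Copy an interfaces file to the switch and apply it over SSH."""
    subprocess.run(["scp", local_file, f"admin@{SWITCH}:{remote_path}"], check=True)
    subprocess.run(["ssh", f"admin@{SWITCH}", "sudo", "ifreload", "-a"], check=True)

push_config("interfaces.leaf01")
```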

There are now 50+ switch vendors out there that support this, including top-tier hardware vendors like Dell, HPE, Mellanox, etc.

It's a very compelling model and one that has several advantages.  First is choice.  Like I said, you can change the way that piece of hardware functions by changing the OS or the apps within the OS.  Second is price.  These switches are often dramatically lower in cost than their traditional networking OEM peers while providing similar if not better performance.  Many of these new 100Gbps spine switches and 25Gbps leaf switches cost less than major OEM 10 & 40Gbps switches.

Then there is the SDN and overall DevOps automation discussion.  Again, since the switch OS is based on Linux and can support scripting languages like Python or tools like Puppet or Chef, the development and networking teams now have a common set of scripts to perform on-the-fly configuration as part of a continuous integration or continuous deployment strategy.
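For instance, a CI job might render per-switch configs from a template the teams share, then hand them to the push step above. A minimal sketch using only the standard library (the swp port naming and MTU value are made-up examples):

```python
from string import Template

# Hypothetical per-port stanza in /etc/network/interfaces style.
stanza = Template(
    "auto swp$port\n"
    "iface swp$port\n"
    "    mtu $mtu\n"
)

# Render server-facing ports swp1-swp48 with a jumbo-frame MTU.
config = "".join(stanza.substitute(port=p, mtu=9216) for p in range(1, 49))

with open("interfaces.leaf01", "w") as f:
    f.write(config)
```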

This space is really starting to gain momentum and I believe it will become the norm in 5-10 years.  Just like every other aspect of the data center, from compute to storage and now networking, the movement is to commodity hardware and open software.

So something to consider as you plan your next data center.  
