Software Defined Networking - the next major disruption in the Datacenter

Software Defined Networking, or SDN, has actually been around for several years now - but new products and solutions are making it more and more attractive and affordable for mainstream data centers.

First, a little history on data center networking and how we got to the current state.

For many years a three-tier architecture was the standard for data center design. There was the access tier - more commonly known as "top of rack" - where physical servers connected; the aggregation tier - often at the end or middle of a row - which consolidated the access layer switches; and finally the core tier - a very powerful set of switches that collapsed all the aggregation switches and typically provided layer 3 routing services. Depending on the size and complexity of your data center, some folks would connect servers directly to the core and others would go straight from access to core - but the concept was pretty much the same.

All of these switches, regardless of vendor, were proprietary in nature - meaning the vendors typically used custom ASICs in the hardware design for performance, plus their own OS that provided the features and capabilities for any one tier. Add to that architecture the various L4-7 services provided by appliances - firewalls, load balancers, VPN, and so on - and that was your network. We'll keep Storage Area Networking (SAN) out of the conversation for now - but it effectively followed a somewhat similar architecture.

Then compute virtualization came about and things started to change. The first and biggest change was that top of rack (TOR) networking effectively moved from the physical switch itself to the hypervisor. Now VMs connect to a "virtual" switch - basically a piece of software that abstracts traditional layer 2 networking. A host machine could have multiple virtual switches supporting the multiple virtual NICs defined for its VMs. This allows VMs that need to speak to each other to do so without their packets ever leaving the hypervisor.
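To make the idea concrete, here is a minimal sketch of a software switch on a plain Linux host, driven from Python with standard iproute2 commands. The bridge and tap names (vswitch0, tap-vm1, tap-vm2) are hypothetical - in practice the hypervisor creates the tap devices when it defines each VM's virtual NIC, and products like Open vSwitch or VMware's vSwitch do far more than this.

    # Minimal sketch: a Linux bridge acting as a "virtual switch" on a hypervisor host.
    # Assumes a Linux host with iproute2 and root privileges; the tap device names
    # are hypothetical and would normally be created by the hypervisor itself.
    import subprocess

    def sh(cmd):
        """Run a shell command and fail loudly if it errors."""
        subprocess.run(cmd, shell=True, check=True)

    # Create the software switch (a layer 2 bridge) and bring it up.
    sh("ip link add name vswitch0 type bridge")
    sh("ip link set vswitch0 up")

    # Attach each VM's virtual NIC (tap device) to the bridge. Traffic between
    # the two VMs is now switched entirely in software and never leaves the host.
    for tap in ("tap-vm1", "tap-vm2"):
        sh(f"ip link set {tap} master vswitch0")
        sh(f"ip link set {tap} up")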

That is, until you needed to add those L4-7 services. Since those services were often provided by dedicated appliances, packets frequently had to make trips out of the hypervisor to the appliance and back. Eventually that changed too: many L4-7 appliance vendors began developing virtual appliances that you could load inside the same host to provide those services locally. But that approach meant consuming VM host resources for the various virtual appliances, and there was still no centralized management.

Additionally, as virtualization matured into what we now describe as "cloud" - where VM mobility is key - constraints emerged in how the network outside the hypervisor could support a VM that had just moved from one host to another. Often moving a VM meant it was suddenly on a different subnet, with no easy way to keep communicating with the hosts and services it depended on. Add to that the impact of those L4-7 services again, and suddenly there was a level of complexity that made this flexibility difficult to deliver.

One approach that addressed the complexity was the Network Virtualization Overlay (NVO). Basically an NVO provides a network within a network - the packets of the virtual network are encapsulated inside packets on the physical network. Two very prominent protocols came out of this approach - VXLAN and NVGRE. But even with these tunneling protocols, there needed to be a way to consolidate control of the network and separate it from the data transport. Often you still had to manage each switch individually, and there was no single control point for networking policies that told the physical and virtual switches what to do.
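As a rough illustration of the encapsulation idea, here is a hypothetical sketch that builds a VXLAN tunnel endpoint on a Linux host with iproute2 and attaches it to the virtual switch from the earlier example. The VNI, multicast group, and interface names are made up for the example; in a real NVO deployment the controller programs these tunnels rather than relying on multicast flooding.

    # Minimal sketch of a VXLAN overlay on a Linux host, again via iproute2.
    # Names (vxlan100, vswitch0, eth0) and values (VNI 100, group 239.1.1.100)
    # are illustrative only.
    import subprocess

    def sh(cmd):
        subprocess.run(cmd, shell=True, check=True)

    # Create a VXLAN interface: layer 2 frames sent into it are wrapped in
    # UDP/IP packets (port 4789) and carried over the physical underlay via eth0.
    sh("ip link add vxlan100 type vxlan id 100 group 239.1.1.100 dev eth0 dstport 4789")

    # Attach the tunnel endpoint to the host's virtual switch, so local VMs and
    # remote VMs on the same VNI appear to share one layer 2 segment.
    sh("ip link set vxlan100 master vswitch0")
    sh("ip link set vxlan100 up")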

Today that has changed. There are several NVO solutions out there - one of the best known is VMware's NSX. Built from the acquisition of a company called Nicira, NSX not only provides basic layer 2/3 NVO services - it also addresses many L4-7 capabilities with firewalling, load balancing and remote access. It also brought forth the concept of policy-based networking, with a "controller" that tells all the switches what to do.

The next change came with advancements in "merchant silicon" - off-the-shelf switching ASICs that any vendor can buy. There was no longer a huge advantage in paying a premium for a custom ASIC-based design. In effect networking, like every other aspect of modern computing, became a commodity. Along with this commodity hardware came open OS projects based on Linux. Linux is not new to networking - in fact most proprietary network OSes have a Linux/Unix core. The difference, however, is in price and flexibility.

These new "open" switches do something that was unheard of 5 years ago - and that is the ability to run different OS for different use cases.  These switches support a "standard" called ONIE - or Open Networking Install Environment. Think of ONIE like PXE - so the ability to boot a raw piece of hardware and then have any OS you want installed on it.  Many of these open OS already have VXLAN or GRE support built in and do one very simple task - they move packets - very quickly.  They also have the ability to be "managed" or controlled by the same NVO controller that is managing the virtual environment.  

There are several of these open OS vendors out there - Cumulus Networks, for example, or IP Infusion. The combination of these two pieces - merchant silicon switches and open OSes - is dramatically driving down the cost of data center networking while providing tremendous flexibility in what the switches can do. And the cool thing is, if you want to change the OS, just boot the switch into ONIE and install a different one - much like a server today. And since the base for these OSes is just Linux, you can also load additional tools like Puppet or Chef to perform whatever orchestration you need.
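Puppet and Chef have their own configuration languages, but the underlying point is simply that these switches can now be managed like any other Linux server. As a simpler stand-in for that idea, here is a hypothetical Python sketch that pushes the same command to a handful of switches over SSH using the paramiko library; the hostnames, username and key path are placeholders.

    # Hypothetical sketch: treat open switches as ordinary Linux servers and run
    # a command on each one over SSH. Hostnames, username and key path are
    # placeholders - a real deployment would more likely drive this through
    # Puppet, Chef or a similar orchestration tool.
    import paramiko

    SWITCHES = ["leaf1.example.net", "leaf2.example.net", "spine1.example.net"]

    def run_on_switch(host, command):
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username="admin", key_filename="/home/admin/.ssh/id_rsa")
        try:
            _, stdout, _ = client.exec_command(command)
            return stdout.read().decode()
        finally:
            client.close()

    # Example: check which OS release each switch in the fabric is running.
    for switch in SWITCHES:
        print(switch, run_on_switch(switch, "head -1 /etc/os-release"))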

Finally, the overall architecture of networking has changed from the tiered model to the spine/leaf model. In spine/leaf, the leaf provides the access layer and plays a similar role to the TOR in that it is the first physical connection from a host. Each leaf switch connects to a mesh of spine switches - this could be two or even four. In spine/leaf, traditional HA protocols like STP are replaced with TRILL (Transparent Interconnection of Lots of Links) or SPB (Shortest Path Bridging).
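The wiring pattern itself is easy to describe: every leaf connects to every spine, so any two servers are at most two fabric hops apart (leaf to spine to leaf). The toy Python sketch below just enumerates that mesh and works out an oversubscription ratio for some hypothetical port counts (48 x 10G host ports per leaf, one 40G uplink from each leaf to each spine) - the numbers are illustrative, not a sizing recommendation.

    # Toy sketch of the leaf/spine wiring pattern: every leaf connects to every spine.
    # The counts below are hypothetical examples.
    spines = [f"spine{i}" for i in range(1, 5)]   # e.g. 4 spine switches
    leaves = [f"leaf{i}" for i in range(1, 9)]    # e.g. 8 leaf (top of rack) switches

    links = [(leaf, spine) for leaf in leaves for spine in spines]
    print(f"{len(links)} leaf-to-spine links")    # 8 leaves x 4 spines = 32 links

    # A quick oversubscription check, assuming 48 x 10G host ports per leaf and
    # one 40G uplink from each leaf to each spine:
    host_bw_per_leaf = 48 * 10              # Gbps of server-facing capacity
    uplink_bw_per_leaf = len(spines) * 40   # Gbps of fabric-facing capacity
    print(f"oversubscription ratio: {host_bw_per_leaf / uplink_bw_per_leaf:.1f}:1")  # 3.0:1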

When you combine a modern network like this with software defined storage (SDS) and hyperconverged compute/storage appliances, you have the makings of the Software Defined Data Center (SDDC), and concepts like Infrastructure as Code (IaC) become viable.

Some of the traditional networking vendors make the argument that by implementing an NVO you're effectively paying for the network twice - once for the physical switches and OS, and again for the NVO licensing. In some ways they are right, and that can be expensive - especially when you're buying their proprietary products. But I would argue that if you move to an open switch model, your total cost will be less than buying their gear alone.

I truly believe that SDN and open switches will be the next major shift in data center design, and that you should start looking at how to take advantage of these new capabilities.
