The future of networking is software

I believe the future of networking is software. To be a little more specific on my thoughts, I believe the future of networking in the data center is software-based. Water is wet, and on that note, thank you for coming to my TED talk. And, quick PS: clouds are also hosted in data centers.

While what I just mentioned may be obvious to say—given how pretty much everything has its future in software—you would be surprised at the state of the networking industry if you came to it as an outsider. The cloud has already happened, and compute has embraced it with both arms… and legs, and tentacles, and whatever else you have available. Storage is not too far behind either; cloud-scale storage technologies are getting closer and closer to being commoditized1. Networking—in a way—is the last frontier in the data center not yet fully subscribing to the cloud ethos.

Things are certainly changing—more on that later. But, networking vendors are still trying to sell you pricey switches, they are only just now getting serious about DevOps, and their idea of a software-defined network (SDN) is to run a bunch of software on top of their existing traditional network solution stack (i.e., ASIC-driven expensive boxes + mostly-blackbox NOS/firmware + ridiculous licensing + exorbitant professional services + consultancy that makes sure you stay with them).

Forgive me for sounding cynical; I am not. I do think things should be better, though. To that end, let me share a few observations and sort-of-predictions I believe about the future, at least at this point in time. However—fair disclaimer—I may have a vested interest in this game, so take whatever comes beyond this point however you will.


I believe the future of networking—at least, networking in the data center—is in software. Networking in general is gradually heading that way. The telco industry has learned from what happened with the “cloud fad” and is already trying to move towards NFV2, SDN3, and SD-WAN4. But let me focus on the data center (and, in turn, the cloud), which has been my work domain for a while.

I believe the enablers of this shift will be multi-fold. For example, network virtualization5 and network functions virtualization2 are precursors, and they have already been here for a while. Network equipment supporting alternative NOSs has also been around for a while: some brand vendors like Mellanox and Dell EMC—as well as whitebox switch vendors—support multiple OSs on their switches, such as generic Linux, Cumulus, Open Network Linux, and SONiC, alongside their own.

I believe the next wave of convergence will incorporate networking into already converged compute and storage. For lack of a better term—and in line with the buzzword-based naming (e.g., CI6/HCI7)—let us call it Fully-Converged Infrastructure (FCI). If and when this becomes a reality, it will likely co-exist with existing types of infrastructure—similar to how HCI/CI still remain somewhat niche markets. However, if it comes to pass with significant enough adoption, it will be a salient indicator that the balance has tipped from hardware to software.

In any case, I believe there will be a significant shift in direction from specialized hardware-based switch/network appliances to software-based networking on generic computing platforms.

I am not sure how the gap will be bridged exactly. Perhaps, x86 server-based networking will become more powerful to catch up to ASIC/FPGA-accelerated networking. Perhaps, network appliance hardware will become generic open platforms. Perhaps, ASICs and FPGAs will become increasingly generic till they become commodity components in generic computing. Perhaps, it will be somewhere in between.

We already see some precursors in the evolution of the hardware aspect. For example, x86-based switches are now commonplace, while some ASICs are moving more and more into the territory of generic platforms8.

Software in the networking industry, on the other hand, has been a little more nimble. While the proprietary software stacks of traditional vendors have continued to evolve, there has also been interest in consolidating effort on common components. For example, Linux Foundation Networking and the Open Networking Foundation have received contributions from even the bigger network vendors such as Cisco, Arista, Juniper, and Mellanox. This has resulted in the development or improvement of interesting open source technologies such as DPDK, P4, VPP, SONiC, Stratum, etc. Open source-based as well as proprietary vendor OSs9 have also been available in the market.

The physical networking components (e.g., fiber/copper infrastructure, optical transceivers, cables, connectors, etc.) are likely to remain and continue to evolve in traditional/proprietary supply chains, as they involve physical fabrication, manufacturing, and other logistics.

But the non-physical side of the networking space (i.e., processing, forwarding, routing/switching decisions, protocols, virtual endpoints, segmentation/micro-segmentation, policy frameworks, telemetry/observability, etc.) will get much more interesting. I am not saying traditional networking equipment will become obsolete in a hurry. But I believe building networks with generic computing platforms akin to the x86 servers of today—perhaps augmented with FPGAs or ASICs—will at least become a mainstream option for ops teams. This networking infrastructure will largely be defined by the software it runs.
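To make that concrete: the heart of a router's job—deciding where a packet goes next—is just a longest-prefix-match lookup, which is perfectly expressible in ordinary software on a generic platform. Here is a toy sketch (the prefixes and next-hop names like "spine-1" are hypothetical, and a real data plane would of course use far more optimized structures and run much closer to the hardware):

```python
import ipaddress

# A toy routing table: prefix -> next hop. In a software-defined data plane,
# this is the kind of state a controller programs into the forwarding path.
ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"): "spine-1",
    ipaddress.ip_network("10.1.0.0/16"): "leaf-3",
    ipaddress.ip_network("0.0.0.0/0"): "edge-gw",
}

def next_hop(dst: str) -> str:
    """Longest-prefix match: the core per-packet decision a router makes."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in ROUTES if addr in net]
    # The most specific (longest) matching prefix wins.
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[best]

print(next_hop("10.1.2.3"))  # matches 10.1.0.0/16 -> leaf-3
print(next_hop("10.9.9.9"))  # matches 10.0.0.0/8 -> spine-1
print(next_hop("8.8.8.8"))   # default route -> edge-gw
```

The point is not performance—it is that once the decision logic lives in plain software like this, swapping out policy, adding telemetry, or reprogramming the fabric becomes a software deployment rather than a hardware refresh.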

With cloud computing10 becoming a mainstay—along with all the shifts in thinking it brings—networking is due for a re-imagining. While the body of domain knowledge, standards, abstractions, and even expertise can be reused, traditional thinking—such as believing in different switch boxes for different places11 in the network—is, IMHO, antiquated and only serves to drive network vendor sales.

Application-aware Layer 4 - Layer 7 software systems are already starting to do pretty cool things12. However, the Layer 2 - Layer 3 networking that enables them—and can further empower them—is yet to catch up to the future that is already dawning.

And, I believe it is going to be awesome to build.


  1. e.g., Ceph, MinIO, etc. 

  2. Network Functions Virtualization (e.g., OPNFV) 

  3. Software-Defined Networking 

  4. Software-Defined Wide-Area Networking 

  5. Network Virtualization (e.g., VMware NSX) 

  6. Converged Infrastructure 

  7. Hyperconverged Infrastructure 

  8. Competing vendors like Arista and Cisco use the same Broadcom ASIC silicon series (e.g., Tomahawk, Trident, Jericho) in some of their comparable switches. 

  9. e.g., Cumulus Networks, Big Switch Networks, and Pluribus Networks 

  10. Cloud computing and its inner aspects—such as public/hybrid clouds, Cloud Native Infrastructure, etc.—have become a mainstay in the industry and continue to influence thinking in adjacent areas of technology. 

  11. Whether in terms of the older core/aggregation/access switch classification or the newer spine/leaf classification, the thinking is still hinged on the fact that these switches are usually built for different purposes, with different limitations and capabilities. However, if everything is software running on sufficient compute power with the necessary physical connectivity, there will not be much technical justification for why any one of your switches cannot be a core switch, or why one should cost orders of magnitude more than another. 

  12. Layer 4 - 7 software options include cloud-native software such as Cilium, Service Mesh software (e.g., Envoy/Istio, and Linkerd), etc. 

 
