The future of networking is software

Gaveen Prabhasara

I believe the future of networking in the data center is software-based. Water is wet, and thank you for coming to my TED talk.


While what I just mentioned may be obvious to say—given how pretty much everything has its future in software—you would be surprised at the state of the networking industry if you came to it as an outsider. The cloud has already happened, and computing has embraced it with open arms, tentacles, and whatever else is available. Storage is slower to move but not too far behind: cloud-scale storage technologies are getting closer and closer to being commoditized1. Networking—in a way—is the last frontier in the data center that has not yet fully subscribed to the cloud ethos.

Things are certainly changing—more on that later. But networking vendors are still trying to sell you expensive switches, and they are only now getting serious about DevOps. Their idea of a software-defined network (SDN) is to run a bunch of software on top of their existing traditional network solution stack (i.e., ASIC-driven expensive boxes + mostly black-box NOS/firmware + ridiculous licensing + exorbitant professional services + consultancy that makes sure you stay with them).

I believe the future of networking—at least networking in the data center—is in software. Networking, in general, is gradually heading that way. The Telco industry has learned from what happened with the cloud and is trying to move towards NFV2, SDN3, and SD-WAN4 already. But let me focus on the data center, where the clouds also reside.

I believe the enablers of this shift will be manifold. Network virtualization5 and network functions virtualization2 are precursors, and network equipment supporting alternative NOSs has been around for a while: brand vendors like Mellanox and Dell EMC, as well as whitebox switch vendors, support multiple OSs on their switches, such as generic Linux, Cumulus Linux, Open Network Linux, SONiC, and their own NOSs.
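
To make that concrete: on a switch whose NOS is essentially Linux (e.g., Cumulus Linux, or a generic distribution on a whitebox), the front-panel ports appear as ordinary kernel network interfaces, so the same tooling used on servers applies directly. Below is a minimal sketch using the pyroute2 library; the interface names (swp1, swp2) and the bridge name are illustrative assumptions, not anything prescribed by a particular NOS.

```python
# Minimal sketch: managing a Linux-based switch like any other Linux host.
# Assumes pyroute2 is installed, the script runs with root privileges, and
# the front-panel ports appear as kernel interfaces named swp1/swp2
# (the names are illustrative only).
from pyroute2 import IPRoute

ipr = IPRoute()

# Create a bridge, exactly as you would on an x86 server.
ipr.link('add', ifname='br0', kind='bridge')
br0 = ipr.link_lookup(ifname='br0')[0]

# Enslave two front-panel ports to the bridge and bring them up.
for port in ('swp1', 'swp2'):
    idx = ipr.link_lookup(ifname=port)[0]
    ipr.link('set', index=idx, master=br0, state='up')

ipr.link('set', index=br0, state='up')
ipr.close()
```

The specific calls matter less than the fact that the switch exposes the standard Linux netlink interface, which means any automation stack already used for servers can drive it.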

One way things can go is the next wave of convergence, incorporating networking into already converged computing and storage. For lack of a better term—and in line with the buzzword-based naming (e.g., CI6/HCI7)—let us call it Fully-Converged Infrastructure (FCI). I feel an inspired OEM vendor could make it happen. If this comes to pass with significant enough adoption, it will be a salient indicator that the balance has tipped from hardware to software. Technically, the same shift could happen without a noticeable wave of further convergence, with the balance of power moving from specialized hardware-based switch/network appliances to software-based networking on generic computing platforms.

I am not sure exactly how the gap will be bridged. Perhaps x86 server-based networking will become powerful enough to catch up to ASIC/FPGA-accelerated networking. Perhaps network appliance hardware will become generic, open platforms. Perhaps ASICs and FPGAs will become increasingly generic until they are commodity components in general-purpose computing. It could be an amalgamation of all of these.

We already see some precursors in the evolution of the hardware aspect. For example, x86-based switches are now commonplace, while some ASICs are moving more and more into the territory of generic platforms8.

Software in the networking industry has been a little more agile. While the proprietary software stacks of traditional vendors keep evolving, efforts to consolidate around common infrastructure are also underway, albeit with limited adoption. For example, Linux Foundation Networking and the Open Networking Foundation have received contributions from major network vendors such as Cisco, Arista, Juniper, and Mellanox. This has resulted in the development or improvement of interesting open source technologies such as DPDK, P4, VPP, SONiC, and Stratum. Both open-source-based and proprietary vendor OSs9 are available in the market.
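
SONiC is a good example of how far this goes: its running configuration lives in a Redis-backed CONFIG_DB, so plain database clients can inspect or change switch state. The sketch below uses the redis Python client; the database index (4) and the PORT|Ethernet0 key follow SONiC's documented CONFIG_DB conventions, but treat the exact schema and field names as assumptions to verify against your release.

```python
# Rough sketch: reading and updating SONiC's Redis-backed CONFIG_DB.
# Database index 4 and the "PORT|Ethernet0" key/fields follow SONiC's
# documented CONFIG_DB conventions; verify against your SONiC release.
import redis

cfg = redis.Redis(host='localhost', port=6379, db=4, decode_responses=True)

# Read the current configuration of a front-panel port.
port = cfg.hgetall('PORT|Ethernet0')
print(port)  # e.g., {'admin_status': 'up', 'mtu': '9100', 'speed': '100000', ...}

# Change a setting the same way any application would update a database row.
cfg.hset('PORT|Ethernet0', mapping={'mtu': '9100', 'admin_status': 'up'})
```

A switch whose configuration is literally a database that ordinary application code can read and write is a very different artifact from a sealed appliance.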

The non-physical side of the networking space (e.g., processing, forwarding, routing/switching decisions, protocols, virtual endpoints, segmentation, micro-segmentation, policy frameworks, telemetry, observability, etc.) will get much more exciting. I am not saying traditional networking equipment will become obsolete in a hurry. But I believe building networks with generic computing platforms akin to the x86 servers of today—perhaps augmented with FPGAs or ASICs—will at least become a mainstream option for ops teams at some scale. Such networking infrastructure could be managed by the very software it runs.
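
For instance, once the "switch" is just a Linux machine, even basic telemetry stops being a licensed vendor feature and becomes ordinary code. Here is a minimal sketch, assuming only a Linux kernel exposing the usual sysfs counters; it is illustrative, not a production exporter.

```python
# Minimal observability sketch: exporting per-interface byte counters
# straight from the Linux kernel's sysfs, on a server or a Linux-based
# switch alike. Pure standard library; a sketch, not a production exporter.
import json
from pathlib import Path

def interface_counters():
    """Return {ifname: {'rx_bytes': int, 'tx_bytes': int}} for all links."""
    counters = {}
    for iface in sorted(Path('/sys/class/net').iterdir()):
        stats = iface / 'statistics'
        counters[iface.name] = {
            'rx_bytes': int((stats / 'rx_bytes').read_text()),
            'tx_bytes': int((stats / 'tx_bytes').read_text()),
        }
    return counters

if __name__ == '__main__':
    # Emit JSON so any telemetry pipeline (or just a cron job) can consume it.
    print(json.dumps(interface_counters(), indent=2))
```

From there, shipping the output to whatever observability pipeline the rest of the infrastructure already uses is an application problem, not a networking one.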

With cloud computing10 becoming a mainstay—along with all the shifts in thinking it brings—networking is due for a re-imagining. While the body of domain knowledge, standards, abstractions, and even expertise can be reused, the traditional thinking of different switch boxes for different places11 in the network may—IMHO—be antiquated and only serve to drive network vendor sales. In fact, cloud consumers mostly do not care about anything below Layer 4. Therefore, skeuomorphic constructs such as switches and virtual switches may not have an inherent purpose in these scenarios.

Application-aware Layer 4 (and above) software systems are already starting to do pretty cool things12. However, the Layer 2 and Layer 3 networking that enables and empowers them has yet to catch up to that future. It will be an exciting journey to get there.


  1. e.g., Ceph, MinIO, etc. ↩︎

  2. Network Functions Virtualization (e.g., OPNFV) ↩︎ ↩︎

  3. Software-Defined Networking ↩︎

  4. Software-Defined Wide Area Networking (SD-WAN) ↩︎

  5. Network Virtualization (e.g., VMware NSX) ↩︎

  6. Converged Infrastructure ↩︎

  7. Hyperconverged Infrastructure ↩︎

  8. Competing vendors like Arista and Cisco use the same Broadcom ASIC silicon series (e.g., Tomahawk, Trident, Jericho) in some of their comparable switches. ↩︎

  9. e.g., Cumulus Networks, Big Switch Networks, and Pluribus Networks ↩︎

  10. Cloud Computing and its inner aspects, such as public/hybrid clouds, Cloud Native Infrastructure, etc., have become a mainstay in the industry and continue to influence the thinking in adjacent areas. ↩︎

  11. Whether we speak of the older core/aggregation/access switch classification or the newer spine/leaf classification, both hinge on the fact that these switches are usually built for different purposes, with different limitations and capabilities. However, if everything is software—running on sufficient compute power and the necessary physical connectivity—there will not be much technical justification for why any one of your switches could not serve as a core switch, or why one should be an order of magnitude more expensive than another. ↩︎

  12. Layer 4 - 7 software options include cloud-native software such as Cilium, service mesh software (e.g., Envoy/Istio and Linkerd), etc. ↩︎