Viewpoint

Over the years, telecom operators have faced increasingly complex issues with their networks, given market growth and multifaceted needs.

While we have seen some progress, it is time to take a further step forward to address the needs of the market and the struggles operators have been facing. Early on, operators adopted control and user plane separation (CUPS) to address the distinct characteristics and requirements of each plane. This became particularly significant with network disaggregation. As operators focus more on their service offerings, they are either offloading to third parties any parts of their networks and operations that are generic enough (such as data centers and towers), or moving network functions to the public cloud.

 
However, while data centers and wireless towers are, at the end of the day, sell-and-lease-back real-estate deals, network functions hold significant meaning for the telecom industry. This demands that operators refresh their understanding of CUPS and go back to the basics of their network planning, which brings us back to the fundamental difference between the control plane and the data plane.
 
While the control plane does not carry a significant amount of traffic, it does require a substantial amount of compute resources. This becomes even more critical as the number of network connections grows (driven by IoT services such as mMTC and Industry 4.0). The data plane, on the other hand, requires fewer compute resources (as CUPS leaves it with very little "brains") but a huge amount of networking resources.
 
This need for more networking resources leads to further obstacles in the field. Moving network functions into a public, private, or hybrid cloud makes sense for the control plane, as the cloud provides just that – compute (and storage) resources. When it comes to the data plane, though, running such functions over a compute-centric infrastructure – whether a virtualized network function (VNF) over a server or a cloud-native network function (CNF) over the cloud – simply does not scale.
 
While VNFs eliminate the limitations of monolithic chassis by implementing network functions over standard, scalable, unified hardware infrastructure, they lack the ability to easily deploy and scale network functions across multiple sites. Adding a networking site requires significant investment in IT infrastructure, which prevents VNFs from being utilized and scaled globally.
 
As the next step in the evolution process, CNFs allow for an elastic implementation of network functions, as they can run in containers over any cloud infrastructure, increasing service scale and flexibility. They do, however, also introduce great inefficiencies and high costs. Such server-based cloud architecture utilizes central processing units (CPUs) and graphics processing units (GPUs), which are optimized for wide-ranging applications but not for networking functions, leading to low performance and service quality. Network processing units (NPUs), which are optimized for networking, can address these network function needs; however, the platforms supporting them – mainly routers – are proprietary and closed, lacking the application programming interfaces (APIs) and abstraction layers needed to efficiently run network function microservices as containers, at scale.
 
The most cost-effective, elastic and scalable solution to these issues is a software-based, cloud-native architecture that incorporates both CPUs and NPUs to address the networking-specific requirements of network functions. Implementing network functions over such a networking-optimized cloud allows operators to efficiently run functions that require intense networking resources over a lean infrastructure of CPUs and NPUs. This is achieved by sharing resources across different instances of the hardware infrastructure, optimizing the utilization of all available resources. DriveNets, a market leader in cloud-native networking software and network disaggregation solutions, has introduced its Network Cloud solution to address these issues.
 
DriveNets’ network cloud-network function (NCNF) solution adapts the architectural model of the cloud to telco-grade networking. Network Cloud is cloud-native software that runs over a shared physical infrastructure of standard white boxes, radically simplifying network operations and offering telco-scale performance and elasticity at a much lower cost.
 
As presented in the diagram below, different network services have different resource requirements:
 
 
Resource sharing allows full utilization of infrastructure resources and enhances service quality by using optimized NPUs for networking functions. It also enables the tight integration of additional network functions with the networking infrastructure. Such integration leads to dynamic fine-tuning and optimization of network functions based on a wide array of traffic and performance metrics collected in real time.
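As a toy illustration only – the numbers, service names and first-fit placement policy below are invented for this sketch and are not DriveNets' algorithm – the utilization gain from resource sharing can be seen when complementary profiles (a compute-heavy control-plane function and a networking-heavy data-plane function) fill each other's spare capacity on a shared box, instead of each stranding that capacity on a dedicated one:

```python
# Hypothetical sketch: packing network services with different CPU/NPU
# demands onto shared infrastructure vs. dedicating hardware per service.
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    cpu: int  # compute units needed (control-plane-heavy services)
    npu: int  # networking units needed (data-plane-heavy services)

# Each white box offers both resource types (illustrative capacities).
BOX_CPU, BOX_NPU = 10, 10

def boxes_dedicated(services):
    """One or more boxes reserved per service: spare capacity is stranded."""
    total = 0
    for s in services:
        total += max(-(-s.cpu // BOX_CPU), -(-s.npu // BOX_NPU))  # ceil division
    return total

def boxes_shared(services):
    """First-fit sharing: a service lands on any box with room on both axes."""
    boxes = []  # each entry is [cpu_used, npu_used]
    for s in services:
        for box in boxes:
            if box[0] + s.cpu <= BOX_CPU and box[1] + s.npu <= BOX_NPU:
                box[0] += s.cpu
                box[1] += s.npu
                break
        else:
            boxes.append([s.cpu, s.npu])
    return len(boxes)

services = [
    Service("control-plane fn", cpu=8, npu=1),  # compute-heavy, little traffic
    Service("data-plane fn", cpu=2, npu=9),     # networking-heavy, little compute
]

print(boxes_dedicated(services), boxes_shared(services))  # → 2 1
```

With dedicated hardware each service occupies its own box even though most of that box sits idle; with sharing, the two complementary profiles fit on a single box, halving the footprint in this contrived example.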
 
When it comes to time- and latency-sensitive applications, an NCNF architecture becomes increasingly essential, particularly with the introduction of new use cases and the ongoing development of remote operations applications. Networking resources become increasingly relevant, particularly at the network edge, making the concept of ‘edge cloud’ the networking-aware successor of the edge computing function.
 
Operators have recognized the need to take a step further to advance their networks and build them like a cloud, to leverage cost, flexibility and innovation benefits, based on an ecosystem that supports the vision of disaggregated core and edge networks.
 
The evolution of network functions is here to stay.
 

 
