This article was contributed by Lior Mishan of Ethernity Networks
Service providers have been under increasing pressure for several years to meet the growing expectations and demands of users. Now the advent of 5G networking is only adding to those expectations.
Subscribers want more services, delivered faster, and providers are doing all they can to keep up. What consumer or business doesn’t want super-high-resolution video, multi-player online gaming, and augmented reality and virtual reality applications? Add to that low-latency connectivity to support and enable IoT, artificial intelligence, and machine learning applications.
All these trends point to one inescapable conclusion: traditional concepts of access will not be able to accommodate our network connectivity needs. High speeds alone cannot address those needs. That is why providers are increasingly recognizing the need to upgrade to more intelligent networks that can meet user needs for advanced services, but with lower operational costs.
At the core of this emerging new network is a dynamic and scalable approach to network provisioning, especially where applications demand high bandwidth and low latency. The transition from traditional hardware-centric networks to software-defined architectures requires highly scalable resources, deployable and adjustable on demand, and also easily manageable.
For some time, a cloud-based edge computing approach has been the star that has guided the industry toward the future of networking. Now, at last, providers are really getting focused on a new architecture, one that is focused on agility and scalability in distributing compute resources to improve service quality and security.
For many good reasons, the focus of the transition is the central office. That may be the traditional wireline CO, or, from a 5G-centric viewpoint, the remote edge where BBU pools are located, or, in the cable industry, the headend or hub. As sensible as the CO may be as a focal point, this approach has limitations. When all traffic is aggregated to the CO, for instance, it can quickly create a bottleneck that degrades throughput and latency.
However, if the focus is on a virtualized and distributed network at the edge, it can take advantage of both the economies of a data center and the agility of a software-defined network, while still keeping the central office at the heart of the architecture. Efforts such as the ETSI MEC (Multi-Access Edge Computing) initiative, OPNFV (Open Platform for NFV), and the CORD (Central Office Re-Architected as a Datacenter) project have already begun that process of applying cloud design principles to the CO.
Accomplishing this transition requires a migration from proprietary edge hardware. For instance, instead of edge routers, commercial off-the-shelf (COTS) server arrays can be deployed in the CO. This approach enables software to be ported onto any server, creating an open environment with edge computing and programmability, and eliminating proprietary systems and vendor lock-in.
Thanks to their proximity to end users, virtual COs provide a huge latency advantage when compared with centralized data centers. Placing COTS servers – offering general-purpose compute resources and the ability to run any function – at the network edge gives providers the agility to adopt new low-latency services by moving computing closer to the place where it is demanded.
Providers already have tremendous physical assets at the network edge, and virtual COs leverage those to help them compete with over-the-top (OTT) companies. It becomes less challenging to introduce value-added services such as streaming video and augmented and virtual reality when virtualization and cloud design are applied to the CO.
One example is content delivery network (CDN) video streaming. With this new network approach, a geographically dispersed network of bare-metal servers can temporarily cache and deliver live and on-demand high-quality video to web-connected devices based on a user’s location.
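The location-based delivery step can be illustrated with a toy sketch. Everything here is hypothetical – the node names, coordinates, and the nearest-node heuristic are stand-ins for whatever request-routing logic a real CDN would use – but it shows the basic idea of steering a streaming request to the closest edge cache:

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical edge cache nodes hosted at virtual-CO sites: name -> (lat, lon)
EDGE_NODES = {
    "co-nyc": (40.71, -74.01),
    "co-chi": (41.88, -87.63),
    "co-sfo": (37.77, -122.42),
}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def pick_edge_node(user_location):
    """Direct a streaming request to the geographically nearest edge cache."""
    return min(EDGE_NODES, key=lambda name: haversine_km(EDGE_NODES[name], user_location))

print(pick_edge_node((40.0, -75.0)))  # a user near Philadelphia -> "co-nyc"
```

In practice a CDN would also weigh cache hit rates, load, and network topology, not just geographic distance, but the principle is the same: serve content from the edge site closest to the subscriber.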
Although CO virtualization does address the key network edge requirements of disaggregating traffic, enabling new low-latency services, and overcoming vendor lock-in, there are challenges.
At this point, the disaggregated network is a work in progress, despite the fact that most providers are moving toward CO virtualization. There are also issues of space and power. Remote locations such as COs are not always well suited to housing the arrays of servers needed for a software-based virtualized solution, or for supplying sufficient power for these platforms.
One way to address this is to use programmable hardware acceleration to enable the virtualized solution to run more efficiently and flexibly, while providing additional functionality and significant savings on traditional operating expenses.
Hardware acceleration can be accomplished with Field-Programmable Gate Arrays (FPGAs) mounted on a network interface card (creating programmable SmartNICs), which deliver a software-defined solution in a compact silicon chip. This approach offloads the virtual network function (VNF) data plane from the CPU, with network and security functions ported onto the FPGA. As a result, fewer CPUs are needed, and they are reserved for the computing functions and user applications for which they are best suited.
FPGA acceleration can be paired with DPDK (the Data Plane Development Kit) for standard acceleration APIs and simplified integration. The benefits multiply when several VNFs are incorporated onto a single FPGA SmartNIC within a single server.
Not only does this yield savings in server cores, physical space, and power, it also delivers superior scalability even at high bandwidths, and it can provide enhanced security by using the FPGA to bypass the CPU entirely for encryption and decryption tasks.
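The data-plane split described above can be modeled as a simple dispatch decision. This is an illustrative sketch only – the function names and the set-membership classification rule are invented for this example and do not correspond to any real SmartNIC or DPDK API – but it captures the division of labor: pure packet-processing work stays on the FPGA, while application-level work is punted to the CPU.

```python
# Functions ported onto the FPGA data plane in this hypothetical deployment:
# forwarding, tagging, filtering, and crypto never touch the CPU.
FPGA_OFFLOADED = {"l2_forward", "vlan_tag", "acl_filter", "ipsec_encrypt"}

def dispatch(packet):
    """Return which engine handles the function this packet requires."""
    return "fpga" if packet["function"] in FPGA_OFFLOADED else "cpu"

flows = [
    {"flow": 1, "function": "ipsec_encrypt"},    # crypto offloaded; CPU bypassed
    {"flow": 2, "function": "video_transcode"},  # application work stays on CPU
    {"flow": 3, "function": "l2_forward"},       # plain forwarding stays on FPGA
]
for p in flows:
    print(p["flow"], "->", dispatch(p))
```

The payoff in the article's terms: every flow the dispatch sends to "fpga" is a flow that consumes no server cores, which is where the savings in cores, space, and power come from.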
As user needs and provider preferences continue to shape the next-generation CO, there is no question that providers will rely heavily on edge virtualization and disaggregation of network traffic and functions. With this approach, they can relieve bottlenecks and bring the innovative, value-added, and user-demanded services right to the network edge.
Meeting the space, power, and scalability needs of this new CO calls for an SDN solution and a programmable FPGA-based acceleration architecture. This has the potential to optimize the network edge and deliver the efficiency that providers need in a connected, 5G-centric environment. This could be their best strategy for competing with OTT providers.
Lior Mishan is Head of Marketing for Ethernity Networks, www.ethernitynet.com, provider of programmable hardware-based acceleration for networking and security on FPGAs for virtual COs and cloud edge deployments.