This Industry Viewpoint was authored by Charlie Ashton, senior director of business development, Napatech
In the past year, we've seen significant growth in the number of private networks being deployed for enterprise applications. Commercial private deployments advanced greatly in 2022, and all signs point to that trend continuing. In fact, Gartner estimates that by 2025, 75% of data will be processed at the edge – outside of traditional centralized data centers in the cloud. This shift promises improved security and lower latency. It also means, however, that the network core must be hosted either on-premises or at the edge of the service provider network.
There’s a flip side, though. These edge locations can be costly, with significant constraints in terms of energy consumption and physical footprint. That means organizations looking to truly reap the benefits of these new use cases – and accommodate the increased compute and bandwidth they require – have to grapple with these challenges. One way to address this is to leverage offload technology, which can help maximize network performance while also minimizing energy consumption and costs.
Examining how network deployments have changed
As a result of these trends, enterprises are now deploying their networks and network cores in locations that are extremely constrained in terms of physical footprint, power consumption, security and more. These aren't the nicely air-conditioned data centers many organizations had gotten used to; they're out in the "real world."
This makes it essential to minimize the number of servers placed in these less-than-optimal locations, since servers are expensive, occupy significant space and consume significant energy. The best way to manage this reduction is to use servers optimized for their job; in other words, let the servers run the kind of workloads they're designed for, which are applications and services. Don't get caught in the trap of tying up your very expensive server compute resources running workloads they're not optimized for, such as network packet processing.
As an example, within the 5G packet core of a 5G network, the User Plane Function (UPF) represents the highest compute workload, performing critical packet inspection, routing and forwarding functions. General-purpose server CPUs aren’t well suited to the performance and latency requirements of this workload, which makes UPF an ideal function to offload from the general-purpose server CPU.
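To make that workload concrete, the sketch below illustrates the kind of per-packet work a UPF data plane performs: inspect a packet's header, match it against forwarding rules, and choose a next hop. Every name, rule and address here is hypothetical and greatly simplified (real UPFs apply 3GPP-defined packet detection and forwarding rules at millions of packets per second) – but it is precisely this tight per-packet loop that is a poor fit for a general-purpose CPU and a good candidate for offload:

```python
# Minimal illustrative sketch of per-packet UPF-style work:
# inspect the header, match against forwarding rules, pick an action.
# All rules and addresses are hypothetical, not a real UPF implementation.

from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_port: int

# Hypothetical forwarding rules: (dst prefix, dst port or None, next hop)
RULES = [
    ("10.0.",   None, "edge-app-server"),   # local breakout at the edge
    ("172.16.", 443,  "internet-gateway"),  # HTTPS traffic to the internet
]

def classify(pkt: Packet) -> str:
    """Inspect a packet and return its next hop (or drop it)."""
    for prefix, port, next_hop in RULES:
        if pkt.dst_ip.startswith(prefix) and (port is None or pkt.dst_port == port):
            return next_hop
    return "drop"

print(classify(Packet("192.168.1.5", "10.0.0.7", 80)))     # edge-app-server
print(classify(Packet("192.168.1.5", "172.16.2.9", 443)))  # internet-gateway
```

Running this loop in software for every packet of every subscriber consumes CPU cycles that could otherwise serve applications, which is why pushing it down to a NIC pays off at scale.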
Where offload technology fits in
The role of an offload solution is straightforward: it maximizes compute resource utilization and energy efficiency within edge and cloud data centers by freeing servers to run applications and services rather than networking workloads. That means your organization can fully reap the benefits of newly deployed private networks. This is especially important when it comes to accelerating enterprise, cloud and telecom workloads.
By implementing such a solution, you gain improvements in capital expenditures (CapEx) while also reducing operating expenses (OpEx). You're able to increase the number of users that can be supported per server, while also improving the efficiency of your network monitoring. Referring back to the 5G UPF example, by offloading the UPF data plane to a Smart Network Interface Card (SmartNIC), carriers and/or enterprises can support 50 times as many users per server with more than 90% reductions in per-user CapEx and OpEx compared to software-only implementations.
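The per-user economics can be sanity-checked with simple back-of-envelope arithmetic. In the sketch below, the server cost, SmartNIC cost and baseline user count are made-up illustrative assumptions – only the 50x user-density multiplier comes from the example above – but they show how a large density gain swamps the added NIC cost:

```python
# Back-of-envelope per-user CapEx with and without SmartNIC offload.
# All cost figures are hypothetical assumptions; only the 50x
# user-density multiplier is taken from the article's UPF example.

SERVER_COST = 10_000       # assumed cost of one edge server (USD)
SMARTNIC_COST = 2_000      # assumed cost of one SmartNIC (USD)
BASELINE_USERS = 1_000     # assumed users per server, software-only UPF
DENSITY_MULTIPLIER = 50    # 50x users per server with offload

software_only_per_user = SERVER_COST / BASELINE_USERS
offloaded_per_user = (SERVER_COST + SMARTNIC_COST) / (BASELINE_USERS * DENSITY_MULTIPLIER)

reduction = 1 - offloaded_per_user / software_only_per_user
print(f"Software-only: ${software_only_per_user:.2f}/user")   # $10.00/user
print(f"With offload:  ${offloaded_per_user:.2f}/user")       # $0.24/user
print(f"Per-user CapEx reduction: {reduction:.1%}")           # 97.6%
```

Even with a SmartNIC adding 20% to the assumed server cost, the per-user figure drops well past the 90% reduction cited, because the denominator grows 50-fold.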
Getting started with offload technology
Organizations looking to benefit from offload technology within edge deployments should make sure that they estimate and understand the business-level return-on-investment of a proposed solution. That’s in addition to understanding the purely technical factors such as performance metrics and energy consumption.
They must also ensure the solutions they're evaluating conform to relevant industry standards – whether those are hardware-related, such as PCI-Express connectivity, or software-focused, like industry-standard APIs (DPDK is one example) and orchestration (e.g., Kubernetes).
Reaping the power of private networks
2022 has seen steady growth in the deployment of private networks for enterprise and industrial applications.
However, as these deployments scale up, they're under massive pressure to optimize return-on-investment (ROI). That means doing everything possible to maximize efficiency and fully reap the benefits of these networks.
Offload solutions like SmartNICs are a key part of this puzzle, providing a way to capitalize on the promise and potential of private networks by minimizing the cost and power consumption of the compute infrastructure, while maximizing the number of subscribers or devices that can be supported in each server.
Charlie Ashton is senior director of business development at Napatech, a leading supplier of programmable Smart Network Interface Cards (SmartNICs). In this role, he focuses on new business and partnership opportunities in the telecom and cloud segments. He has extensive experience in the telecom industry, having worked in business development, marketing and strategy roles for a number of systems and software companies. As an avid hiker, he temporarily left the comforts of the corporate world to try his luck at living in the woods while hiking the Appalachian Trail.