Distributed core or edge computing? Is one better than the other?

December 15th, 2017

This Industry Viewpoint was authored by Mattias Fridström, Telia Carrier

Edge computing is a new spin on an old concept: putting computing resources at the edge of the network so that services are delivered faster to the end-user. But the concept of distributed computing runs head-on into the need for scale and centralized management of resources, especially as cloud services delivery, IoT, big data analytics, and artificial intelligence (AI) come to define the new paradigm for computing. Even as hardware becomes cheaper, smaller, and less power-intensive, reaching end-users quickly and efficiently across the network will always be the top requirement, regardless of where compute resources are located.

Since the beginning of IT time, carriers have needed to move content between content producers and end-users. Content is never produced or archived where it is consumed. The key to fast service delivery is making sure content travels over a network that reaches as many end-users as possible in the most efficient fashion.

As content has become larger, more sophisticated, and even life-critical, end-users expect better quality. Distributing content closer to the user provides a better experience, but it comes at a cost to the content provider: a more complex distribution network with servers at the edge, and the overhead of locating and maintaining equipment across multiple geographic regions. Content delivery networks (CDNs) provide a distribution network as a service, but also at an additional cost.

CDNs have an important role to play, but they have limitations when it comes to speeding up non-cacheable cloud services or other interactive services. At first glance, edge computing might help by placing servers closer to the end-user, but there are limitations for certain applications. Any task pushed out to the edge needs to be relatively stand-alone and self-contained, so basic staples like financial transaction processing, e-commerce, and anything else that requires a centralized database aren't going to be "edge worthy."
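The distinction above can be illustrated with a toy sketch. This is not any real CDN's implementation — the class, URLs, and cacheability flags are invented for illustration — but it shows why static assets benefit from the edge while transactional requests still incur a round trip to the centralized origin:

```python
# Toy model: an edge cache serves repeat requests for static content
# locally, but anything tied to shared, mutable state (a checkout,
# an inventory lookup) must go back to the origin every time.

class EdgeCache:
    def __init__(self):
        self.store = {}          # cached responses by URL
        self.origin_fetches = 0  # round trips back to the core

    def fetch_from_origin(self, url):
        self.origin_fetches += 1
        return f"response for {url}"

    def get(self, url, cacheable):
        if cacheable and url in self.store:
            return self.store[url]  # served entirely at the edge
        response = self.fetch_from_origin(url)
        if cacheable:
            self.store[url] = response
        return response

cache = EdgeCache()
for _ in range(3):
    cache.get("/logo.png", cacheable=True)    # static: 1 origin fetch total
    cache.get("/checkout", cacheable=False)   # transactional: 3 fetches
print(cache.origin_fetches)  # 4
```

Three visits generate only one origin fetch for the static asset but three for the checkout — the edge never removes the transactional round trip.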

Internet of Things (IoT) applications would appear to be edge computing candidates at first glance, but IoT's power comes from the collection, aggregation, and archiving of data from thousands to millions of end devices. The value in IoT comes from the data collected over time and analyzed as a whole across the entire collection of "things." The edge, by its nature, holds only a small piece of the data puzzle in isolation, while an aggregate IoT data set can be sorted and probed in numerous ways with statistical tools.
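A minimal sketch makes the point concrete. The site names and sensor readings below are invented: each edge node sees a locally stable average, and only the centralized aggregate reveals that one site is drifting away from the fleet:

```python
# Each edge site holds only its own slice of the readings; the
# fleet-wide outlier only becomes visible once data is aggregated
# centrally and compared against the whole population.
from statistics import mean, stdev

sites = {
    "site-a": [20.1, 20.3, 19.9],
    "site-b": [20.0, 20.2, 20.1],
    "site-c": [24.8, 25.1, 24.9],   # anomalous, but stable locally
}

# In isolation, every edge node just sees a steady local average...
local_means = {s: mean(v) for s, v in sites.items()}

# ...but only the central aggregate can flag outliers fleet-wide.
fleet = [x for readings in sites.values() for x in readings]
fleet_mean, fleet_sd = mean(fleet), stdev(fleet)
outliers = [s for s, m in local_means.items()
            if abs(m - fleet_mean) > fleet_sd]
print(outliers)  # ['site-c']
```

The anomalous site would pass any purely local check — its readings barely vary — which is exactly why the aggregate view is where IoT's value lives.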

Centralization is also necessary to go beyond big data analytics into anything involving AI or machine learning. Today's intelligent data tools require substantial compute power to perform their tasks. Size matters: one or more dedicated rows of servers in a data center beats any single box at the base of a phone tower, or any rack of servers sharing time among dozens of edge processes.

The arrival of 5G networks, while providing justification for edge computing, also works against it. If content providers and end-users can communicate across the network with lower latency, it makes more sense to keep the network design and its elements as simple and as powerful as possible. Edge computing adds cost and complexity to the network, with a hard upper bound on the total computing power that can be placed at the edge and dedicated to a task. The best end-user experience is delivered by the cleanest and fastest path to the most powerful cloud resources available to complete the task at hand.

Ultimately, rising expectations for content and application services will require a combination of edge and centralized resources; they aren't mutually exclusive. Edge computing enables next-generation latency-sensitive applications to be fully realized. Augmented reality (AR) is one case where the edge offers a definitive advantage over a distant data center. A cell phone or other AR device such as smart glasses (Google Glass 2.0, anyone?) needs to be updated as fast as possible because both the end-user and the device are moving through the real world. Localized information overlaid onto the AR user interface cannot lag; there is little to no tolerance for delay. If it is easier and faster to walk into a building to find out what restaurant choices are available than to wait for the display to update, the app will go unused.
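The physics behind this latency argument can be sketched with back-of-envelope arithmetic. The distances and the budget below are illustrative assumptions, not measured figures, but light in fiber travels at roughly 200,000 km/s, so distance alone sets a floor on round-trip time before any processing happens:

```python
# Propagation delay in fiber: ~200,000 km/s, i.e. ~200 km per ms.
# Distances are hypothetical examples of a metro edge node versus
# a distant centralized data center.
FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km):
    """Speed-of-light lower bound on round-trip time over fiber."""
    return 2 * distance_km / FIBER_KM_PER_MS

edge_site_km = 50      # nearby metro edge node
core_dc_km = 2000      # distant centralized data center

print(round_trip_ms(edge_site_km))  # 0.5 (ms)
print(round_trip_ms(core_dc_km))    # 20.0 (ms)
```

Under these assumptions, a 2,000 km round trip alone consumes on the order of 20 ms — roughly the entire motion-to-photon budget often cited for comfortable AR — before a single server cycle is spent, which is why proximity matters for this class of application.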

Similarly, connected vehicles will need rapid information updates. A driver getting traffic updates from localized servers can avoid jams and bumper-to-bumper roads if they get alerts and suggested alternative routes as quickly as possible. If alerts are delivered after the driver is stuck, they are effectively useless.

While the edge allows us to circumvent many physical limitations, it is still dependent on a solid and well-connected core.

# # #

Global wholesale operators such as Telia Carrier have built, and continue to improve, their networks based on the belief that providing the simplest, fastest path between content providers, cloud services, and end-users will deliver the best end-user experience now and in the future. Telia Carrier's direct customers account for 53 percent of global Internet routes, and its #1-ranked Internet backbone provides direct paths between content providers, service providers, and end-users alike. Telia Carrier also built the first 100 Gbps backbone in both North America and Europe and has upgraded paths to provide 200G and 400G when the market is ready.

Mattias Fridström, Chief Evangelist, Telia Carrier

There’s a certain type of person who gets a little bit too excited about networks. Mattias is that guy. If he had a tattoo, it would be of a network. His knowledge is mind-boggling and his passion irrepressible – no one makes connectivity come alive like Mattias. He offers deep insights into the networked economy. What are the challenges of tomorrow for network providers? How can we meet ever-increasing traffic demand and customer quality expectations within the same cost frame?

Mattias holds an MSc in Electrical Engineering from the University of Wollongong, Australia. Since joining Telia in 1996, he has worked in a number of senior roles within Telia Carrier and most recently as CTO. Since July 2016 he has been Telia Carrier’s Chief Evangelist. Mattias’ passion isn’t limited to networks. He has played golf professionally and competed on a national level in football and innebandy. Although he has a reputation when it comes to sports as being the “worst loser at Telia Carrier,” he is working hard to overcome this – by not losing to anyone. At anything. Ever!


Categories: Cloud Computing · Industry Viewpoint

1 Comment


  • Next-gen CDNs (Fastly and section.io) are ahead of the legacy CDN providers with their ability to cache dynamic or "uncacheable" content. E-commerce data (inventory levels, pricing, APIs, etc.) is static in nature, but the unpredictability of when its values change is what makes it uncacheable. Look at both companies' ability to purge in real time (under 150 ms) and apply configuration changes (2 seconds or less), and content that was previously thought uncacheable no longer is.
