Industry Spotlight: Volta Networks’ Hugh Kelly

June 8th, 2020

With broad adoption of new technologies like 5G, AI, and IoT looming on an ever-closer horizon, network operators are working hard to solve the problem of scaling the edge economically. With that problem has come opportunity, and new companies have been rising to the challenge. With us today is Hugh Kelly, VP of Marketing at Volta Networks. Volta has developed technology to move much of the processing done by today's big-box routers into the cloud.

TR: What are the origins of Volta Networks?  How did it get started, and what problem are you looking to solve?

HK: Volta Networks was founded in 2015 by our CEO, Dean Bogdanovich. Dean has been in the routing business a long time, and before founding Volta, he was a distinguished engineer at Juniper. He saw some opportunities to enhance the implementation of routing, particularly as the industry started moving towards open networking. The big insight was that the data plane can be addressed by a very cost-effective white box, but that white box doesn’t have a lot of CPU.  Carrier-grade routing is a very CPU and memory intensive activity, because these are complex networks. He realized that you don’t literally have to have all the routing protocols running on that box. The best place to be able to scale processing and memory cheaply is in the cloud, and if you architected the solution correctly it would work fine. So, Volta created a product that we call the Volta Elastic Virtual Routing Engine, a cloud-based control plane that basically allows us to create as many different routing engines as the customer needs.  They are all centralized in a cloud, which can be any public, private, or even a hybrid cloud environment. That then allows us to run multiple separate routers on a single white box, which is a unique capability of Volta.
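
To make the architecture concrete, here is a minimal sketch in Python, using entirely hypothetical names rather than Volta's actual API: several per-customer routing engines live in the cloud, and all of them program forwarding state onto one shared white box, keyed by virtual router so tenants stay separate.

```python
# Hypothetical sketch (not Volta's actual API): a cloud-side controller hosting
# several virtual routing engines that all program one white-box data plane.
from dataclasses import dataclass, field

@dataclass
class VirtualRoutingEngine:
    """One per-customer routing engine running in the cloud."""
    name: str
    rib: dict = field(default_factory=dict)  # prefix -> next hop

    def learn_route(self, prefix: str, next_hop: str) -> None:
        self.rib[prefix] = next_hop

@dataclass
class WhiteBox:
    """The low-cost box: just a forwarding table, partitioned per virtual router."""
    fib: dict = field(default_factory=dict)  # (vr_name, prefix) -> next hop

class CloudControlPlane:
    """Holds as many virtual routing engines as customers need."""
    def __init__(self, box: WhiteBox):
        self.box = box
        self.engines: dict[str, VirtualRoutingEngine] = {}

    def add_engine(self, name: str) -> VirtualRoutingEngine:
        self.engines[name] = VirtualRoutingEngine(name)
        return self.engines[name]

    def push_fib(self) -> None:
        # Each engine's routes are installed on the shared white box,
        # keyed by virtual router name so tenants stay segmented.
        for vre in self.engines.values():
            for prefix, nh in vre.rib.items():
                self.box.fib[(vre.name, prefix)] = nh

box = WhiteBox()
cloud = CloudControlPlane(box)
cloud.add_engine("customer-a").learn_route("10.0.0.0/24", "192.0.2.1")
cloud.add_engine("customer-b").learn_route("10.0.0.0/24", "198.51.100.1")
cloud.push_fib()  # the same white box carries both virtual routers' forwarding state
```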

TR: How does that differ from other nextgen routing technologies, and why did you go down that path? 

HK: Most network operating systems are disaggregated in the sense that you can mix hardware and software from any number of different vendors, but that's still pursuing the appliance model. Everything has to be administered on a single box, and those boxes really don't have the horsepower to run an OS, a hypervisor, and multiple routing stacks because it gets too expensive. We have found that two trends work in our favor. First of all, some of the chipsets from ASIC manufacturers like Broadcom became very well adapted to the telco environment. And secondly, IP and Ethernet are going to be critically important to transport networks driven by the migration to 5G. At any given point in the network one needs a low-cost router that can support multiple virtual routers. With lots of different services running on it, one needs to be able to do some segmentation. We were very fortunate that such a greenfield opportunity came around at a time like this, and we are one of a small handful of routing software vendors that focus predominantly on service providers. We are specialists in helping the big telcos drive down the cost of the network by expanding virtualization in order to address all these new opportunities.

TR: How much of your solution runs in the cloud, and how does it manage its interaction with the white box?

HK: The vast majority of the control plane runs in the cloud. We do have an agent that runs on the box that takes care of the communications between the cloud and the ASIC. Then there are some things that have to be done locally in hardware, particularly for failover, because it has to be done really fast. But building a routing table is not really anywhere near as time-sensitive, because we’re really just sending updates back and forth, differences between the prior state and the current state of the routing information base. For that, data latency is not a particularly big issue. We’ve actually run tests, and in the most extreme example we were running the control plane in an AWS location in Germany for a customer doing a lab test with a switch in South Africa and everything worked fine. But most of our service provider customers will probably implement this on a private cloud that is embedded within their operations. It could be run on AWS or Azure, but it’s pretty typical of service providers to have the resources to do it internally.  Another big advantage we have is we can spread the cloud elements over multiple data centers for redundancy, reliability, scalability, all the things that customers really like about carrier-grade operations.
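
The reason latency is so forgiving here is that only the changes to the routing information base travel between the cloud and the box. A rough sketch of that diff, assuming a simple prefix-to-next-hop table (hypothetical, not Volta's actual protocol):

```python
# Illustrative sketch only (names are hypothetical): the cloud control plane
# doesn't stream the whole routing table; it sends the difference between the
# prior and current RIB state, which is why round-trip latency matters little.
def rib_delta(prior: dict, current: dict) -> dict:
    """Return the updates the on-box agent must apply to its forwarding state."""
    add = {p: nh for p, nh in current.items() if p not in prior}
    change = {p: nh for p, nh in current.items()
              if p in prior and prior[p] != nh}
    withdraw = [p for p in prior if p not in current]
    return {"add": add, "change": change, "withdraw": withdraw}

prior = {"10.0.0.0/24": "192.0.2.1", "10.0.1.0/24": "192.0.2.1"}
current = {"10.0.0.0/24": "192.0.2.9", "10.0.2.0/24": "192.0.2.1"}
print(rib_delta(prior, current))
# {'add': {'10.0.2.0/24': '192.0.2.1'},
#  'change': {'10.0.0.0/24': '192.0.2.9'},
#  'withdraw': ['10.0.1.0/24']}
```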

TR: At what stage of development is this technology?   

HK: By and large, the product is generally available everywhere and we've been in field trials. They have been slowed a little bit by COVID-19, but we're still seeing things moving forward. However, because we've chosen to work with service providers, it's a very lengthy process to get qualified through lab tests. We will be working on things for a long time because we keep getting new feedback from customers. Also, there are some evolutions in routing to work on. For example, for 5G applications with fronthaul, backhaul, and small cells, some of the traditional elements in cell sites are going to be virtualized and run at a remote location, and network timing becomes really critical. There is a standard called IEEE 1588, or the Precision Time Protocol (PTP), and we need to add support for that. One project we're working on right now is finalizing segment routing, which is how traffic engineering for these networks will be done. If you have lots of different traffic, like edge computing and latency-sensitive applications like automation, autonomous cars, and telemedicine, how is that edge traffic managed? How is QoS implemented to ensure service level agreements are met?
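
As a rough illustration of the traffic-engineering idea behind segment routing (a toy example, not Volta's implementation): the ingress router classifies traffic and pushes a segment list, so latency-sensitive edge traffic can follow an explicit low-delay path while bulk traffic takes the default one.

```python
# Toy segment-routing sketch: per-class segment lists (label stacks) chosen at
# the ingress router. All SID values and the DSCP threshold are hypothetical.
SEGMENT_LISTS = {
    # traffic class -> ordered list of segment identifiers (SIDs)
    "low-latency": [16001, 16005, 16009],  # explicit path via nearby edge sites
    "best-effort": [16009],                # just the egress node SID
}

def classify(dscp: int) -> str:
    """Map a packet's DSCP marking to a traffic class (simplified)."""
    return "low-latency" if dscp >= 46 else "best-effort"

def impose_labels(dscp: int) -> list[int]:
    """Return the label stack the ingress router would push for this packet."""
    return SEGMENT_LISTS[classify(dscp)]

print(impose_labels(46))  # [16001, 16005, 16009]
print(impose_labels(0))   # [16009]
```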

TR: How has this unprecedented spring pandemic environment affected your own operations?

HK: We were a little lucky because we are already a distributed organization. We have our major development centers in Barcelona, but about half of our engineering team works remotely. We have a couple of other locations in Europe, as well as in the United States. Like a lot of other companies, we’re going to find the best people we can, and we care less about where they are. Because of that, all of our labs were set up to be accessed and managed remotely. In fact, our US lab is in a data center, not even in our office. So, when people were working from home, it was really no different. We can go for weeks without anybody having to touch any of the equipment, so we were able to keep to our development schedules. A second piece of good luck for us is we’re a software product designed to be administered remotely. So, a customer can download it into their lab and start doing their tests remotely, because they’re logging into a cloud-like environment that is also designed to be run remotely. In fact, we have done some lab tests with customers even during the lockdowns in the different countries where we’re doing business. We’re still scheduling more tests, although it’s clearly slowed down a little bit.

TR: How do you think the current environment is affecting your customers’ plans?

HK: Every one of our customers has gone through a major shift in how bandwidth is being used in their networks. They have all been able to respond to that, but they've used up a lot of their capacity in that process. They also know that they're going to need the footprint they were planning for 5G to deliver more bandwidth, along with the kind of virtualization that footprint enables. So, I think that's going to keep driving customers forward, and that's the feedback we're getting from the large service providers that we're working with right now.

TR: From the perspective of the overall industry, how do you think the infrastructure has held up so far in the face of shifting demands?

HK: I think the service providers have done a great job. A lot of them, like AT&T, have been very public about how they wouldn't have been able to be as responsive if they hadn't done so much virtualization. It has reinforced the fact that virtualization is a huge advantage, not just because it can lower costs but because it really improves service agility too. And improved service agility means you're able to meet your customers' needs much faster, and that's how you drive the revenue for your business. A service provider recently commented in the press that they're seeing a big spike in video conferencing traffic and gaming, which makes perfect sense. People are stuck at home, so that's what they're using, but as things loosen up, I think some of these changes are here to stay. We are already seeing organizations saying they plan to let people work from home as a regular part of their business. They're finding that the tools are there to be able to do that. If anything, this whole situation has underscored how dependent we are on these networks in order to be able to continue to operate in this environment.

TR: You have also been involved with the Telecom Infra Project, how does that tie into things?

HK: TIP has defined a whole series of different working groups around everything from OpenRAN to how transport networks are supposed to be architected. What they're trying to do is drive a set of common approaches that everyone can draw from. Clearly, with 5G it's not one size fits all. What's going to be really exciting in a dense urban core may not work in a suburban or rural area, and that's okay. 5G was designed to have a lot of flexibility in how you do it. They're trying to build an ecosystem that will help manage all of those different options. On the routing side they have two major projects: disaggregated cell site gateways (DCSG) and disaggregated open routers (DOR), which will be more similar to core routers, with bigger, faster interfaces leveraging different ASICs that can power higher-speed ports. They've given a lot of thought to how to integrate all of this, and the testing process has been very illuminating. They've gone through a series of joint RFIs, shortlisted vendors, and now they're doing all the interoperability testing to provide a shortlist of solutions. On the hardware side, our software runs on Edgecore, Delta, and Alpha, which are the three vendors that they have selected for the DCSG hardware. One of the things I find really refreshing is that the service providers are telling us very clearly what they want. Their expectations are high, and the testing process has been very rigorous. As with any testing process we had some hiccups along the way, and we've fixed things. And after going through this process we will be able to deliver a much better product to our customers. We were a little disappointed when the TIP Summit got delayed from November into next year, although that's not unusual with face-to-face events these days. But it's not slowing down the working groups at all.

TR: Thank you for talking with Telecom Ramblings!

