Industry Spotlight: Apcela CEO Mark Casey

October 22nd, 2018

The rise of the cloud, and of automation generally, has opened new potential business models for service providers. One provider that has taken advantage is hybrid IT specialist Apcela, whose roots lie in the low-latency, high-speed trading networks that arose a decade ago. With us today to talk about how Apcela got where it is and where it might be going is founder and CEO Mark Casey.

TR: What are Apcela’s origins?  How did it come to be what it is today?

MC: The company originally started out in 2005 as a spin-out of CSX Fiber Networks from the railroad business CSX, which focused on optimizing large-scale network challenges for big carriers like AT&T and Verizon. We essentially took that business, split it in half, and acquired the intellectual property and some other assets to form the nucleus of what became CFN Services. In 2007-2008 we identified an opportunity and pivoted the company toward the optimization of networks for high-performance securities trading. As you know, that business has evolved quite a bit over the last 10 years, and by 2011 we came to the conclusion that to continue our growth we were going to need to move beyond that niche. Where we landed is helping large enterprises manage application performance as they evolve toward hybrid IT, and we rebranded the company as Apcela, a mashup of application + acceleration.

TR: What opportunity did you see that drew you into the broader enterprise market?

MC: Most enterprises are moving applications out of the legacy premises-based data center and into the cloud, yet still have an architecture that’s firmly implanted in the time when everything was in the data center. Enterprises are now dealing with that transition at an accelerating pace. Today, some enterprises are 5% in the cloud and 95% in their legacy environment, and some are 90% in the cloud and 10% in legacy data centers. But when you have one foot in each world, it’s hard to know exactly where your applications are, and figuring that out is important for application performance. We realized we could use our global platform for high-performance securities trading as the foundation for an application delivery platform.

TR: How did the infrastructure that you built for the financial vertical adapt to application acceleration for the enterprise?

MC: It was a pretty natural fit. In the capital markets, low latency was critical. In the enterprise hybrid IT world we don’t talk about nanoseconds and microseconds, but milliseconds are certainly relevant, as many applications start to fall apart beyond certain thresholds. A highly distributed platform designed for speed and performance is a core foundational component of what you need to support cloud and hybrid IT. Enterprise data centers are finite and fixed, but the cloud is highly distributed. You need to be able to optimize interconnections with all of those things, and that starts with the distributed footprint.

Our network had been very insular, connecting liquidity centers, trading platforms, etc. So over the last six or seven years we’ve been busy interconnecting that network into the cloud.  We had built our footprint on top of carrier neutral commercial data centers, which enables us to put various components of the application delivery stack into that distributed environment.

TR: What other changes did you need to make?

MC: We have also been building in a lot of higher-level functionality, such as security capabilities like threat prevention, threat detection, data loss prevention, secure web gateways, etc. One of the things that enterprises struggle with as they move toward hybrid IT is that they have traditionally built their security environments inside their data centers. Users’ connections would come back to the data center and then go out to the public internet. The applications weren’t on the public internet; they were in the data center. The internet was an information source, not an application environment. Today, that’s evolved, and the old architecture forces users in a branch office in New York to go through the enterprise data center in Chicago to clear security before going out to a cloud-based application in Ashburn, which adds a lot of latency. Having a distributed environment allows us to distribute that security away from the enterprise data center while keeping it within the control of the enterprise. That improves application performance. On top of that, we also added higher-level application delivery services that help an application get from source to user with appropriate performance, such as load balancing, application delivery controllers, session border controllers, application performance monitoring, and WAN optimization.
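To make the cost of that hairpinned path concrete, here is a back-of-envelope sketch in Python. The round-trip times are purely illustrative assumptions, not measured figures from Apcela's network; the point is only that backhauling through a distant security stack multiplies the path length.

```python
# Back-of-envelope comparison of hairpinned vs. distributed security.
# All round-trip times below are assumed, illustrative numbers.

RTT_MS = {
    ("nyc", "chicago"): 18,      # branch office -> enterprise data center
    ("chicago", "ashburn"): 15,  # data center -> cloud application
    ("nyc", "ashburn"): 6,       # branch -> nearby security hub and app
}

def hairpinned_rtt():
    """Traffic clears security in Chicago before reaching Ashburn."""
    return RTT_MS[("nyc", "chicago")] + RTT_MS[("chicago", "ashburn")]

def distributed_rtt():
    """Security stack sits in a cloud hub near the application."""
    return RTT_MS[("nyc", "ashburn")]

print(hairpinned_rtt(), distributed_rtt())  # prints: 33 6
```

Even with generous assumptions, the hairpinned path here is several times longer per round trip, and chatty applications make many round trips per transaction.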

TR: What does your infrastructure look like today, and how do you balance physical assets with software and services?

MC: Today we operate over 70 cloud hubs globally, reaching an extended environment that touches 185 markets in 41 countries. Within those 70 cloud hubs are what we call AppHUBs™. The AppHUBs are interconnected with those 185 distributed markets using private networks, and to the internet directly as well. The idea, of course, is to extract traffic from the public internet, transmit it over a private network, and deliver it, in a lot of cases, directly to the application. A lot of software-as-a-service applications do not have the option of a “direct connect” like an Amazon or an Azure. There, you want to peer very closely to the cloud data center where that application is housed, so that you’re not relying on the public internet for your transit. The internet is still an important component, but the core is a very low-latency private network. On top of that, in the core, we have hardware-based switching and routing. As we get closer to the edge, we can run that switching and routing on our own merchant silicon or on abstracted merchant silicon from a market IaaS platform.

TR: How has your network expanded in terms of breadth and depth?

MC: It was a massive expansion process. We started out in a niche where we only had to be in 35 markets globally; in North America, for instance, virtually all trading happens in New York, Chicago, and Toronto. To serve the enterprise market, you have to be everywhere else. So we added 150 markets, and we’re adding new markets every month. Globally, we have a super strong footprint in North America and Europe. In Asia, we are in a lot of the key markets because they tend to be coastal. But we’re in a single market in China and two markets in India, so there’s room for expansion there. We have a handful of markets each in South/Central America and Africa, and there are a bunch more to get to there as well. We’ve got very good coverage today, and we’ll continue to grow it.

TR: Where do SDN and SD-WAN fit into the picture for you?

MC: Software-defined everything is where we want to go. We’re taking the entire physical hardware stack of appliances that were required to deliver applications (switching, routing, security, application delivery control, WAN optimization) and running them in a virtualized or software-defined environment. SD-WAN as a technology becomes a powerful component for the enterprise edge because it allows us to build very sophisticated on-ramps and essentially gives the enterprise a programmable network infrastructure. This is the first time you’ve got the entire network switching and routing architecture accessible via a centralized, single, two-way API. We’re able to extract advanced levels of data out of the platform and into an independent analytics and visualization platform. There, we can bring in other sets of data from sources such as the underlay network and the security layers to get a cohesive view across the application delivery environment. That API is also programmable, such that we can send instructions back into it. For example, we can take data, analyze that data, make decisions, write algorithms to deliver automation, and then send those automation commands back through the API to make changes in the network and application delivery environment in real time.
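The closed loop Casey describes (pull telemetry out of a single two-way API, analyze it, push commands back in) can be sketched in a few lines of Python. The controller class, method names, metric fields, and latency threshold below are hypothetical stand-ins for illustration, not Apcela's actual API.

```python
# Minimal sketch of a "read, analyze, write back" loop against a
# centralized two-way SD-WAN controller API. All names are hypothetical.

class SdWanController:
    """Stands in for the centralized two-way API described above."""
    def __init__(self):
        self.metrics = {"branch-nyc": {"path": "mpls", "latency_ms": 180}}
        self.commands = []  # audit trail of changes pushed back in

    def get_metrics(self, site):
        """Read side of the API: extract telemetry for a site."""
        return self.metrics[site]

    def apply_policy(self, site, policy):
        """Write side of the API: push a configuration change."""
        self.commands.append((site, policy))
        self.metrics[site]["path"] = policy["preferred_path"]

LATENCY_THRESHOLD_MS = 150  # assumed application-performance threshold

def remediate(controller, site):
    """One pass of the loop: pull data, decide, send a command back."""
    m = controller.get_metrics(site)
    if m["latency_ms"] > LATENCY_THRESHOLD_MS and m["path"] == "mpls":
        controller.apply_policy(site, {"preferred_path": "broadband"})
        return "rerouted"
    return "no-op"

ctl = SdWanController()
print(remediate(ctl, "branch-nyc"))  # latency exceeds threshold -> rerouted
```

In a real deployment the analysis step would run against the independent analytics platform rather than raw controller state, but the shape of the loop is the same: observe, decide, program.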

TR: How much of the full power of automation is really being used right now by enterprises and how do we speed up the adoption process?

MC: For things like intent-based networking, we are still in the very early innings. Today, that sort of closed-loop environment is designed to help accelerate things like remediation, but it really isn’t achievable without SD-WAN. When you look at enterprises and the processes they execute around trouble isolation and remediation, it takes people power to execute them. Typically, for any given problem, 80% of the time is spent on identification and the remaining 20% on remediation. By using an analytics platform, we’re able to consolidate and accelerate trouble isolation and identify the root of the problem much faster. As we identify more and more problems, we create signatures in the data. Ultimately, we can program an algorithm to look for those signatures and actually execute the remediation programmatically. But most enterprises aren’t ready to go full steam ahead on automation, so today it usually means sending that remediation step to an individual to review and execute.

However, there’s also the fact that the signatures haven’t been built.  You have to build the system that extracts all the data, and then you’ve got to analyze that data and the first few times you have to manually correlate that data to create the signature. Over time, you can use machine learning to build those signatures, but we’re not fully there yet.  It’s just a life-cycle thing.  The tools are largely there, it’s just that all the programming work hasn’t been done yet.
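The signature idea can be sketched roughly as follows: once a fault has been manually diagnosed a few times, its symptom pattern is recorded, and future telemetry is matched against the catalog. The signature names, symptom labels, and remediation actions below are invented for illustration.

```python
# Hypothetical signature catalog: each entry maps a set of observed
# symptoms to a known remediation, built up from past manual diagnoses.

SIGNATURES = {
    "bgp-flap": {
        "symptoms": {"route_withdrawn", "route_announced", "packet_loss"},
        "remediation": "dampen-peer",
    },
    "congested-link": {
        "symptoms": {"high_utilization", "queue_drops"},
        "remediation": "shift-traffic",
    },
}

def match_signature(observed_symptoms):
    """Return (name, remediation) for the first signature whose symptoms
    are all present in the observed set, or None if nothing matches yet."""
    for name, sig in SIGNATURES.items():
        if sig["symptoms"] <= observed_symptoms:  # subset test
            return name, sig["remediation"]
    return None

events = {"route_withdrawn", "route_announced", "packet_loss"}
print(match_signature(events))  # prints: ('bgp-flap', 'dampen-peer')
```

A `None` result is exactly the "signature hasn't been built yet" case: the incident goes to a human, and the correlation work that resolves it becomes a new catalog entry.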

TR: What enterprise verticals are you finding the most success with?

MC: We’ve found a lot of success in areas like biopharma, healthcare, manufacturing and engineering. These markets are natural fits. More broadly, though, what we see is that it’s less about vertical and more about where the enterprise is in its digital transformation cycle. I mentioned earlier that some enterprises are 5% in the cloud and 95% in the legacy data center. Those probably aren’t the best fit, and we’re not doing lots of business with them. The companies we are doing business with are the ones that are maybe 80% in the legacy data center and 20% in the cloud and want to be 50/50; or in some cases, some of our clients are already past 50/50 and driving towards 80/20 (80% in the cloud and 20% in the legacy environment). You have a lot of enterprises saying (and some of these are 100-plus-year-old enterprises), “Well, if we were starting out today, we’d be cloud native.” That’s the kind of enterprise we can really help: a company that definitely wants to transform. The issues mentioned earlier are the kind that get in their way, and we’re able to help knock down those challenges so they can aggressively transform their businesses.

TR: What are the biggest challenges you face in helping an enterprise in this way?

MC: There is a lot of education, because SD-WAN is still an early technology even though it feels like it’s mid-life-cycle already. Two years ago, we had to convince people of the merits of SD-WAN. Now we don’t have to do that as much, but the architectural shift is still uncomfortable for large enterprises. Enterprises are still running single or dual MPLS networks to their locations and feel comfortable with their SLAs. When you move to things like broadband circuits, you get a best-effort guarantee, and enterprises broadly are not ready for that. In practice, two diverse commercial broadband circuits will deliver significantly better performance than a single MPLS link, and arguably come pretty close to the performance of dual MPLS circuits. Because they haven’t had experience with these yet, they’re not ready to make the change; there’s a lot of risk aversion. So there’s still a lot of education going on. Fortunately, we have lots of data from other customers running our platform that we can point to.

TR: How do you go about educating enterprises so that they can better make the transition?

MC: There are two different continuums. The first is whether they are interested in and ready for this change. Increasingly, the answer is yes. In large organizations, there are always a few who tend to be averse to change because there is push-pull at the personnel level. But in any large enterprise with a lot of history, you’re also going to find folks who have been around for a while and have been through the transitions from private line to frame relay and ATM, and from frame relay and ATM to MPLS. They’re prepared for SD-WAN as the next thing because they’ve seen such changes before and saw the improvements they delivered. At this point, I think we’re well up the continuum of falling resistance to SD-WAN the technology. The second part is teaching them how to actually use it. Some are looking to use SD-WAN as just a next-generation edge routing replacement, while others want to get more out of it. Where you are on that continuum determines how much help you need. Some enterprise IT organizations are very far up that curve, but en masse they are not.

TR: Thank you for talking with Telecom Ramblings!


Categories: Industry Spotlight · Low Latency · Managed Services
