Industry Spotlight: Peak’s Luke Norris on the Intercloud

October 30th, 2014

There are few buzzwords less specific these days than the word cloud, but for me the intercloud manages to fog things up even more.  The internet already connects all the clouds out there, right?  So what are we really saying?  I asked Luke Norris, CEO of Peak and one of the folks most involved with hooking up clouds to each other, to help me understand how the intercloud changes the game.  We last talked with Luke last summer as Peak was just beginning its national white-label cloud expansion.

TR: What does the intercloud mean for Peak?

LN: Peak has 8 cloud nodes spanning the US and the UK today.  Because our cloud infrastructures are located in highly connected facilities, customers can easily move and migrate their production applications and workloads from their colocation and on-prem environments into our steady-state, highly redundant cloud infrastructure.  Through intercloud-esque technologies, those same customers who ‘plug’ their workloads into Peak’s cloud are also simultaneously plugged into other cloud systems in a very transparent way at network layer 2.  In other words, major cloud providers can peer – they can leverage compute, storage, and network infrastructure components from one another.  This cloud peering means that customers can take advantage of services such as ‘dupe as a service’ with Amazon, MS-SQL and NOSQL licensing structures with Microsoft… and actually pool those cloud resources.  It creates a nice roadmap for customers who like the steady state but want to eventually take advantage of the ‘best of’ features from other cloud providers.  And if you do it just right, and make it transparent at layer 2, then what you are able to do is really give a single infrastructure, a single pane of glass, and in some cases a single bill.

TR: Does there need to be a balance between the flexibility of tapping multiple clouds and the complexity that gets introduced?

LN: I think there absolutely is a balance.  It takes cloud complexity to an N^x model as you add more and more service provider possibilities.  From Peak’s standpoint, as an InterCloud advocate, we need to build the broadest ingredient set possible.  Peak wants customers’ steady state, their data gravity, with the ability to burst and mix in other cloud service providers’ value-added services atop our infrastructure, locations, and layer 2 cross connections as needed.  In turn, our reseller partners will be able to sell a broader menu of services and solutions to their customers.  We make InterCloud a transparent capability for enterprises.  Is every Peak partner going to take advantage of every cloud service?  No, not by any stretch.  But by being the largest ingredient set in the market, Peak can give its partners the largest range of menu sets.

TR: How big a difference is there between the various cloud options out there?

LN: It’s a massive misconception in the media that all cloud service providers are the same.  They have different SLAs, different CPU and network throughputs, different hypervisors and virtualization technologies, and different overall logical services attached to the physical media.  Each service provider has taken a different approach to these.  At Amazon and Microsoft there is no NFS service to attach to your storage.  At Amazon you can do neat economics with the Redshift big data service.  Microsoft has unique services based around their SQL licensing and SQL-as-a-service.  Many enterprises end up having multiple stacks and still need all their services to be connected by NFS.  By mixing and matching cloud providers you can take advantage not just of the best-of-breed stacks and services within those stacks, but also best-of-breed economics for each application.

TR: Are the large cloud providers like Amazon and Microsoft actively encouraging this sort of hybrid offering?  Who is driving this movement?

LN: No, they are not actively seeking this.  Peering with other cloud providers means they don’t get the full attention and gravity of their customers, so it is not ideal for them.  I don’t know many forward-thinking cloud companies that are doing what we’re doing.  The data centers are doing a great job of creating an ecosystem and a solution, so as the cloud providers look to expand and stretch out, the data centers will be the focal point.  I would say the data centers’ focus on becoming more carrier-neutral, lowering latency, and connecting cloud providers together is what is allowing cloud providers to take advantage of it.

TR: Do you think the rise of the intercloud will help smaller cloud providers make inroads into the dominance of the big cloud providers?  Is it already happening?

LN: Absolutely, and I do believe that is starting to actively show up.  We are seeing hybrid quotes in competition where cloud providers are mixing and matching their own service with Amazon and Google and Microsoft to bring best of breed solutions to market.

TR: What are the key concepts to understand when building in an intercloud world?

LN: You really have to work with each of the cloud providers directly, and you have to be located in these highly cross-connected facilities, where through one hub there are 3-4 cloud providers attached to our infrastructure. These are critical things for us, which we have even put patents out on.  Anyone can get a circuit to Amazon or Microsoft or us, but to get a circuit that is extremely low latency, allows for multiple customers, is managed through APIs with logical separation of each customer, and offers economies of scale of multi-10G circuits, really is a unique service.

TR: How important is the physical location of an enterprise’s cloud data?

LN: The reason we have eight centers across the country is that we think latency is incredibly important for enterprise applications.  The enterprise experience typically requires less than 8-10ms of latency to be really responsive and snappy.  So because of that we think it’s important to have your infrastructure within a particular geo, say within 300 miles of the enterprise itself.  That also allows for very high-speed, robust circuits to be connected and utilized.  With intercloud, you can extend your environment to, say, 500 miles, and have a full-on disaster recovery solution.
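As a rough sanity check on those figures (my own back-of-the-envelope numbers, not Peak’s): light in optical fiber propagates at roughly two-thirds of its vacuum speed, about 200 km per millisecond, so a 300-mile (~480 km) path costs roughly 2.4 ms each way, or about 4.8 ms round trip before any routing or equipment overhead, which fits comfortably inside an 8-10ms budget.  A minimal sketch of that arithmetic:

```python
# Back-of-the-envelope check of the 8-10 ms / 300-mile figures quoted above.
# Assumptions (mine, not Peak's): fiber propagation is ~200 km per millisecond,
# and routing, queuing, and equipment overhead are ignored.

KM_PER_MILE = 1.609
FIBER_KM_PER_MS = 200.0  # approximate speed of light in fiber


def round_trip_ms(miles: float) -> float:
    """Best-case round-trip propagation delay over a fiber path of the given length."""
    km = miles * KM_PER_MILE
    return 2.0 * km / FIBER_KM_PER_MS


for distance in (300, 500):
    print(f"{distance} miles: ~{round_trip_ms(distance):.1f} ms round trip")

# 300 miles -> ~4.8 ms, comfortably inside an 8-10 ms budget
# 500 miles -> ~8.0 ms, at the edge of it, hence better suited to DR than primary serving
```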

TR: How much of the US enterprise market can Peak reach currently within that latency window?

LN: With our partnership with Telx we think we have about 90% of America covered within that 8-10ms range, and we continue to add more locations to drive both latency down and the total population we can address up. We need some additional services in the South, the Midwest, and the mid-Atlantic.  There are active negotiations to expand there in 2015.

TR: Are you planning to expand into those regions organically or inorganically?

LN: Both.  We’re very excited by acquisition opportunities, and also we’re closing additional rounds of venture capital to organically grow.  We continue to expand rapidly.

TR: What stage is the enterprise world at in really accepting solutions based on the intercloud?

LN: There are a few enterprises fully taking advantage of these solutions today.  An enterprise should not move to a cloud that could be limited in its technology and capabilities, unable to take advantage of all the services that are out there.  The InterCloud story resonates very well; however, enterprises are generally in the very early stages of that initial cloud adoption/migration phase.  Once cloud becomes mainstream, InterCloud will be the next evolution.  The early adopters who have already migrated and have their feet securely planted are the ones willing to move in new directions now.

TR: Do you think a phase of consolidation is nearing for cloud services?

LN: I think the great cloud shakeout or roll-up is definitely going to happen.  What I’m seeing is that doing this requires a fundamentally different engineering/infrastructure company than it did even a year ago.  Now you’re talking about APIs and software-defined infrastructure.  It’s a very technical, high skill set, and overlaying that over a robust distributed infrastructure is actually very expensive.  It’s moving so fast that there hasn’t been a natural progression for the workforces to have gained the skill sets.  It takes a new breed of people and services, and a) there simply aren’t enough of them, and b) the cost structure to be in the game is much higher.  And that’s going to weed out the smaller ones that can’t make the necessary investments.

TR: Thank you for talking with Telecom Ramblings!


Categories: Cloud Computing · Industry Spotlight
