In most people’s minds, one of the biggest hurdles enterprises struggle with when it comes to migrating to the cloud is that they may have to sacrifice some security along the way. But is that really the case? With us today to talk about the realities of data security in today’s cloud-powered networked world is Bryan Doerr, CEO of Observable Networks. Before taking the helm at Observable Networks, Bryan was CTO of Savvis for a decade.
TR: How did Observable Networks get started?
BD: Observable Networks was founded by Dr. Patrick Crowley, a professor at Washington University in St. Louis. He was a very prominent researcher in the area of deep packet inspection, which is the idea that at wire speed you can look inside the packet payloads on a network and inspect them for malicious data. In the course of his research, he realized that as the world moved toward more and more use of encryption in networks, the whole DPI-based security apparatus that we have built as an industry wasn’t going to work. When you think about intrusion detection and prevention systems, packet payload detonators, and next-generation firewalls, they all depend on some form of DPI. But properly encrypted data can’t be inspected by DPI and is therefore opaque to those systems and tools. His conclusion was: “We need a better way to identify threats in our networks.” His subsequent work led to Observable Networks.
TR: What technology does Observable Networks use instead?
BD: The company has a core technology called endpoint modeling. Endpoint modeling uses the metadata available in network communications to build models of normal network behavior for every endpoint speaking on the network, whether it is a laptop, a server, a phone, a printer, or a camera, or a virtual asset like a VM running in a virtual private data center or in a public cloud such as Amazon AWS. If it speaks on the network, we build a model of it based on its network traffic. By comparing actual behavior to the models we have built, we come to understand whether the device appears to be changing in a meaningful and/or potentially threatening way, and if so, we notify the end user. If you are looking for a broad category for what we do, you could call it security behavior analytics. Sometimes people think of us as a cloud-native intrusion detection system.
TR: In what form do you take this endpoint modeling to market?
BD: We have created three commercial offers: Observable Enterprise, Observable Cloud, and Observable ICS. They are all offered out of the cloud as software-as-a-service (SaaS). Observable Enterprise is billed based on the number of endpoints counted in the network. Observable Cloud, which is billed based on usage, is primarily an AWS-focused solution today; that is where significant growth for the company is occurring. Observable ICS is like Observable Enterprise, but includes an open source intrusion detection system capability so you can have both together in your deployed environment.
TR: How do you create individual endpoint models?
BD: It’s all automatic. The analytics we have written to model the behavior of a device through time are all automatic. We look at behavior in five areas or dimensions. One is the role of the device. Once you have a role identified, there is a suite of normal behaviors that one can associate with that role, and there are then immediately recognizable abnormal behaviors that might be present. The next dimension is a group assessment. Regardless of the role, collections of devices that are similar will tend to behave similarly, and a device that wants to be part of a group but in fact exhibits outlier behavior immediately becomes suspect. The third dimension is consistency through time. After having watched a device for a period of about 30 days, we have a very good understanding of its normal set of behaviors and we can automatically recognize changes. The fourth dimension is the rules dimension, which is based on policy- and standards-derived behaviors that are acceptable and unacceptable for a network, like contact with devices on threat intelligence lists, cross-subnet communication that should not be allowed, etc. The fifth is the forecast dimension, which has to do with highly periodic, recurring behavior, where deviations from that regular pattern can be predicted and recognized. All of those assessments are performed automatically, for every endpoint individually.
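The consistency-through-time dimension can be illustrated with a toy baseline model. This is a hypothetical sketch, not Observable’s actual implementation: it builds a per-endpoint baseline of destination ports from historical flow metadata, then flags new flows that fall outside the baseline.

```python
from collections import defaultdict

def build_baselines(flows):
    """Build a per-endpoint baseline of observed destination ports.

    flows: iterable of (endpoint, dst_port) metadata records, e.g. from
    a 30-day observation window. Hypothetical data shape for illustration.
    """
    baselines = defaultdict(set)
    for endpoint, dst_port in flows:
        baselines[endpoint].add(dst_port)
    return baselines

def flag_deviations(baselines, new_flows):
    """Return flows whose destination port was never seen for that endpoint."""
    return [(ep, port) for ep, port in new_flows
            if port not in baselines.get(ep, set())]

# Toy example: a printer that normally only speaks IPP (port 631)
# suddenly opens an outbound SSH (port 22) connection -- an outlier.
history = [("printer-1", 631), ("printer-1", 631), ("laptop-7", 443)]
baselines = build_baselines(history)
alerts = flag_deviations(baselines, [("printer-1", 22), ("laptop-7", 443)])
# alerts == [("printer-1", 22)]
```

A real system would model many more features than ports (peers, volumes, timing), but the principle is the same: compare observed metadata against a learned profile.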
TR: Why do you think enterprises shouldn’t be afraid of the security issues that might arise in a cloud environment?
BD: Often what you find is a rationalization for not going to the cloud that is rooted in an objection by a CSO or CIO that the public cloud is less secure than the private data center. The question is: is that actually true? My assessment is that for many applications, and I daresay most applications, it is actually the opposite. It is riskier to stay in your own private data center and implement the best security you can than to go into the public cloud, where the best security solutions are already available.
TR: Why would security in the public cloud be better?
BD: My defense for that position is as follows: If you look at the task of building good security into a private data center, you have to look at all that is necessary. First, there is all the equipment you have deployed – networking gear, computing gear, storage gear. Your primary focus is on getting it all working functionally, but right after that is the need to integrate the log outputs and other security functions that exist within each one of those devices so that you have the beginnings of a security framework. Oftentimes, you will find yourself pushing all of that log data into a security information management system. But you are starting from a low level of information integration and building up from that. Once you get there you aren’t finished, however. You have to then layer in all kinds of security on top of that log aggregation to actually find threats. Importantly, you have to maintain it all through time, because none of these devices are static. They change, their logs change, their behaviors change, and the tower of integration you have built has to be continually refreshed and reintegrated to continue to be valuable. Solving that has been a problem for CSOs and their teams and data center operators forever, and it hasn’t really fundamentally changed.
By contrast, if you look at a public cloud and at the starting point for security there, what you find is that all of that non-value-added data log integration and best-practice configuration work at a hardware and appliance level is done for you. The data center operator is operating at a scale, efficiency, and proficiency that most individual organizations can’t touch. You have a very good chance that the configuration status of all the equipment, which is deployed at massive scale and with massive consistency, is on the whole more secure (at a low level) than what you were able to put together yourself. More important than that, the integration of those pieces of equipment into flows of data that can be leveraged by layered-on tools that provide additional security is so much better than in your private data center that the result is an extraordinarily more comprehensive view of the security posture of your environment.
If you take Amazon AWS for example, and you look at the logs that are available just by turning them on, they represent a platform for security that you would spend months or years trying to obtain in a private data center. Additionally, third parties are accessing those log files and on your behalf building next-generation analytics around what those logs are telling you. The result is a security solution that is more comprehensive with less effort.
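As a concrete example of how accessible those logs are once turned on, AWS VPC Flow Logs emit space-separated records that a few lines of code can consume. The sketch below assumes the default version-2 record format and a hypothetical sample line; it is an illustration, not Observable’s product code.

```python
# Field names for the default (version 2) AWS VPC Flow Log record format.
FIELDS = ["version", "account_id", "interface_id", "srcaddr", "dstaddr",
          "srcport", "dstport", "protocol", "packets", "bytes",
          "start", "end", "action", "log_status"]

def parse_flow_record(line):
    """Split one space-separated flow-log line into a field dict."""
    return dict(zip(FIELDS, line.split()))

def rejected_flows(lines):
    """Yield (srcaddr, dstaddr, dstport) for connections the VPC rejected."""
    for line in lines:
        rec = parse_flow_record(line)
        if rec["action"] == "REJECT":
            yield rec["srcaddr"], rec["dstaddr"], rec["dstport"]

# Hypothetical record: an external host probing RDP (port 3389) was rejected.
sample = ("2 123456789010 eni-1235b8ca 198.51.100.7 172.31.16.21 "
          "42001 3389 6 12 1560 1418530010 1418530070 REJECT OK")
print(list(rejected_flows([sample])))
```

This is the kind of ready-made telemetry the answer refers to: the cloud provider produces it at the platform level, and third-party analytics build on top of it.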
TR: Why do you think people still see the public cloud as less secure?
BD: Many of these beliefs are simply outdated, and they persist today because they once existed, not because they are defensible. There was undoubtedly a point where many of these things were still maturing. At the origin of cloud technology, all that was available was multi-tenant servers and shared storage in somebody else’s data center, running unknown hardware against practices that weren’t visible. That was the earliest version of public cloud computing, and for many people looking at it, it was just untenable. So the ‘cloud is not secure’ idea was born. What is important to realize is that that was eight or nine years ago, and a lot has happened in that time. The benefits of operating infrastructure at truly massive scale weren’t fully realized until later, but they are now legitimately cause for revisiting old positions.
TR: Is there anything today that you can’t do with security in the cloud that you can in a private data center?
BD: At this point I think you can do everything in a public cloud that you can do in a private data center with the exception of controlling the deployment and configuration of physical assets. You have to be comfortable operating in a virtual environment. If the physical control of the asset is important to you for some reason, then obviously public cloud does not provide you with that opportunity. But once you transition from that to the functionality, the instrumentation available to you rivals anything you would have available in a private data center.
TR: Observable’s SaaS platform works in either scenario; which one is more popular today?
BD: In the early stages, we were focused on the classic enterprise private data center, and we have a good customer base that uses us in that capacity. For most of the last 18 months, however, our growth has been in the public cloud world. In our experience, AWS is clearly the most advanced in this regard. They have provided the richest set of integrated data sources about the operation of the infrastructure and the assets you have there. As other public clouds continue to build out their instrumentation, the same kinds of opportunities will be there as well. The customers who find the biggest benefit are those who deploy us in a hybrid configuration, which for us means a single pane of glass across any combination of public and private asset deployments. There they can claim a common set of controls, enabled by the modeling and analysis done by Observable, that spans the public and private data centers.
TR: What’s the biggest challenge cloud-based security services must overcome?
BD: I think there’s an education process that’s necessary to help people understand their role in security practice. As a security solutions vendor, what we need to do is make sure we aren’t wasting people’s time with false positives, so that when we request their attention, they actually get value for it. Also, the world we are coming from is one where people thought security was easy. Many people are familiar with using an IPS, and the nice thing about one is that if you can see the data in the network and recognize a threat signature, you can just shut down the network flow in which that signature is present. I think that for many people, that level of simplicity and interdiction remains important. They want their security tools to both find and remediate threats. While that is a good place to aim, and we are releasing features further down that path, pure turn-me-on-and-forget-about-it security still isn’t really plausible given the nature of the threats.
TR: Thank you for talking with Telecom Ramblings!