Faster Than the Data Curve: Accelerating the Network

April 10th, 2015

This Industry Viewpoint was authored by Dan Joe Barry, VP Positioning and Chief Evangelist, Napatech

The shared sentiment among many recent IT predictions is a recognition of the astronomical quantities of data being produced by cloud, mobile, Big Data and social technologies – the “third platform,” the next-generation IT foundation defined by IDC. As analysts monitor the ever-changing IT landscape, it is clear that regardless of the platform or the means of delivery, the volume, variety and velocity of data in networks continue to grow at explosive rates.

Clearly, platforms and tools that accelerate access to data are needed in today’s infrastructure. As network engineers work to deliver these massive data streams in real time, performance and application monitoring is turning into a pressure cooker, with multiple usage crises dragging down network performance at any given time.

To build more robust networks in today’s fast-paced environments and stay ahead of the ever-expanding data growth curve, network management and security appliances will need to keep pace with advancing network speeds, ensuring that apps run quickly, videos stream smoothly, and end-user data stays secure and accessible.

New Ideas in Appliance Design

The next-generation IT software foundation of the third platform requires software acceleration and support. To address this need, hardware acceleration must both abstract and de-couple hardware complexity from the software and provide performance acceleration. De-coupling the network layer from the application layer realizes this goal, while at the same time opening appliances up to new functions that are not normally associated with their original design.
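To make this de-coupling concrete, here is a minimal sketch in C, assuming a hypothetical capture interface (none of these names come from any vendor SDK): application software is written against a small table of function pointers, and each adapter supplies its own implementation behind it.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Small capture interface that application software is written against.
 * Everything adapter-specific lives behind these three function pointers. */
struct capture_ops {
    int  (*open)(const char *device);
    int  (*next_packet)(uint8_t *buf, size_t buflen); /* returns bytes read */
    void (*close)(void);
};

/* Stub backend standing in for a hardware-accelerated adapter. */
static int  accel_open(const char *device) { printf("opened %s\n", device); return 0; }
static int  accel_next(uint8_t *buf, size_t buflen) { (void)buf; (void)buflen; return 0; }
static void accel_close(void) { }

static const struct capture_ops accel_nic_ops = { accel_open, accel_next, accel_close };

int main(void)
{
    /* The application only ever sees capture_ops; moving to a faster
     * adapter means supplying a new ops table, not rewriting the app. */
    const struct capture_ops *cap = &accel_nic_ops;
    uint8_t buf[64];
    cap->open("port0");
    cap->next_packet(buf, sizeof buf);
    cap->close();
    return 0;
}
```

Swapping in a faster adapter then means supplying a new ops table rather than rewriting the application.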

High-performance network adapters enable network administrators to examine layer-one to layer-four header information at line speed and thus identify well-known applications in hardware. By clearly dividing what is performed in hardware from what is performed in application software, more network functions can be offloaded to hardware, freeing CPU cycles so that application software can focus on application intelligence and perform more analysis at greater speeds.
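As an illustration, the following C sketch does in software what such adapters do at line speed in silicon: walk the layer-two to layer-four headers of a raw frame and extract the 5-tuple that identifies a flow. The struct and function names are illustrative, and the sketch ignores VLAN tags and IPv6 for brevity.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct five_tuple {
    uint32_t src_ip, dst_ip;      /* IPv4 addresses, network byte order */
    uint16_t src_port, dst_port;  /* host byte order after extraction */
    uint8_t  proto;               /* 6 = TCP, 17 = UDP */
};

/* Walk Ethernet -> IPv4 -> TCP/UDP headers in a raw frame and pull out
 * the 5-tuple that identifies the flow. Returns false for frames this
 * sketch does not handle (non-IPv4, non-TCP/UDP, truncated). */
static bool extract_five_tuple(const uint8_t *frame, size_t len,
                               struct five_tuple *ft)
{
    if (len < 14 + 20)
        return false;                            /* Ethernet + minimum IPv4 */
    uint16_t ethertype = (uint16_t)((frame[12] << 8) | frame[13]);
    if (ethertype != 0x0800)
        return false;                            /* IPv4 only in this sketch */

    const uint8_t *ip = frame + 14;
    size_t ihl = (size_t)(ip[0] & 0x0F) * 4;     /* IPv4 header length in bytes */
    ft->proto = ip[9];
    if (ihl < 20 || len < 14 + ihl + 4 || (ft->proto != 6 && ft->proto != 17))
        return false;

    memcpy(&ft->src_ip, ip + 12, 4);
    memcpy(&ft->dst_ip, ip + 16, 4);
    const uint8_t *l4 = ip + ihl;                /* ports lead both TCP and UDP */
    ft->src_port = (uint16_t)((l4[0] << 8) | l4[1]);
    ft->dst_port = (uint16_t)((l4[2] << 8) | l4[3]);
    return true;
}
```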

This hardware can also be used to identify flows and distribute them to up to 32 server CPU cores, allowing massively parallel processing of data – all with low CPU usage. Appliance designers should consider features that leave as much processing power and memory as possible available to applications, and should identify applications that require memory-intensive packet payload processing.
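A sketch of the distribution idea, again in plain C (the adapter computes this in hardware, and the actual hash is vendor-specific; FNV-1a is used here purely for clarity):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define NUM_CORES 32   /* one receive queue per CPU core, up to 32 */

struct five_tuple {            /* as in the previous sketch */
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  proto;
};

/* FNV-1a, a simple stand-in for whatever hash the hardware implements. */
static uint32_t fnv1a(const uint8_t *data, size_t len)
{
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 16777619u;
    }
    return h;
}

/* Hash the 5-tuple through a fixed-size key (skipping struct padding)
 * to pick the receive core for this flow. */
static unsigned flow_to_core(const struct five_tuple *ft)
{
    uint8_t key[13];
    memcpy(key,      &ft->src_ip,   4);
    memcpy(key + 4,  &ft->dst_ip,   4);
    memcpy(key + 8,  &ft->src_port, 2);
    memcpy(key + 10, &ft->dst_port, 2);
    key[12] = ft->proto;
    return fnv1a(key, sizeof key) % NUM_CORES;
}
```

Because the hash covers the full 5-tuple, every packet of a flow lands on the same core, so per-flow state never needs cross-core locking.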

Acceleration Best Practices

To address the problem of downstream analytics in high-volume environments, many tools have sprung up – but their ability to perform real-time analysis and alerting is limited by their throughput. Solutions that extract, transform and load data into downstream systems tend to increase the latency between data collection and data analysis. Moreover, the volume and variety of data being ingested makes it nearly impossible for analysts and decision makers to locate the data they need across the various analysis platforms.

By beginning the intelligence-gathering process at the point of data entry, real-time analysis capabilities will be improved, accelerating “third platform” activities. Best practices include:

  • Minimize unneeded data flow: By inspecting data immediately upon ingress, data-flow decisions can be made at line rate, directing data only to the downstream consumers that need it. This minimizes the unnecessary flow of data through downstream brokers and processing engines.
  • Know what data’s coming in: Real-time alerting lets administrators know what data is entering the system before it reaches decision-making tools, providing intelligent alerts that inform stakeholders when new data relevant to their area of responsibility arrives.
  • Analyze data in-line: By beginning to analyze data at the very moment it is received, organizations can make use of perishable insights—that is, data whose value declines rapidly over time. Doing so ensures that an organization can begin acting on what is happening immediately (a sketch combining these practices follows this list).
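A minimal sketch in C of how these practices combine at the point of ingress, assuming hypothetical stub functions for alerting and downstream hand-off:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

enum verdict { DROP, FORWARD, ALERT_AND_FORWARD };

/* Stubs standing in for real alerting and broker hand-off paths. */
static void notify_stakeholders(const uint8_t *rec, size_t len)
{ (void)rec; printf("alert: %zu-byte record flagged\n", len); }
static void enqueue_downstream(const uint8_t *rec, size_t len)
{ (void)rec; (void)len; /* hand off to downstream consumers */ }

/* Hypothetical per-record classifier applied at line rate. */
static enum verdict classify(const uint8_t *rec, size_t len)
{
    if (len == 0)       return DROP;               /* nothing to analyze */
    if (rec[0] == 0xFF) return ALERT_AND_FORWARD;  /* e.g. a flagged pattern */
    return FORWARD;
}

/* Triage each record the moment it arrives: uninteresting data never
 * reaches downstream brokers, and alerts fire before any batch job runs. */
static void on_ingress(const uint8_t *rec, size_t len)
{
    switch (classify(rec, len)) {
    case DROP:
        return;
    case ALERT_AND_FORWARD:
        notify_stakeholders(rec, len);
        /* fall through */
    case FORWARD:
        enqueue_downstream(rec, len);
    }
}

int main(void)
{
    uint8_t flagged[] = { 0xFF, 0x01 };
    uint8_t normal[]  = { 0x00, 0x02 };
    on_ingress(flagged, sizeof flagged);  /* alerted and forwarded */
    on_ingress(normal,  sizeof normal);   /* quietly forwarded */
    return 0;
}
```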

Infrastructure for the Third Platform

It can be quite costly to buy enough specialized network appliances to scale with increasing demand. Even worse, if the market shifts toward adoption of novel network hardware, organizations must bear the cost of updating legacy infrastructure in order to stay competitive.

What’s needed is a powerful, high-speed platform capable of capturing data with zero packet loss at speeds up to 100 Gbps. Appliance designers can achieve this by de-coupling network and application data processing and by building flexibility and scalability into the network design. The analysis stream provided by the hardware platform can support multiple applications, not just performance monitoring: multiple applications running on multiple cores can execute on the same physical server, with software ensuring that each application can access the same data stream as it is captured.
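The sharing model can be illustrated in miniature with a single-threaded ring buffer, as in the C sketch below (a production appliance adds synchronization, backpressure and overwrite protection): packets are written once, and each application consumes the same stream through its own read cursor.

```c
#include <stddef.h>
#include <stdio.h>

#define RING_SLOTS 8
#define NUM_APPS   3   /* e.g. performance monitor, security, compliance */

struct packet { const char *data; size_t len; };

static struct packet ring[RING_SLOTS];   /* packets written exactly once */
static size_t write_idx;                 /* single producer: the capture path */
static size_t read_idx[NUM_APPS];        /* one independent cursor per app */

static void capture(const char *data, size_t len)
{
    ring[write_idx % RING_SLOTS] = (struct packet){ data, len };
    write_idx++;
}

/* Each application consumes the same captured stream through its own
 * cursor, so no per-application copies are made. */
static void drain(unsigned app)
{
    while (read_idx[app] < write_idx) {
        const struct packet *p = &ring[read_idx[app] % RING_SLOTS];
        printf("app %u sees %zu-byte packet %s\n", app, p->len, p->data);
        read_idx[app]++;
    }
}

int main(void)
{
    capture("flow-A", 6);
    capture("flow-B", 6);
    drain(0);   /* performance monitoring */
    drain(1);   /* security analytics */
    return 0;
}
```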

Thus, the performance monitor becomes a universal appliance for any application requiring a reliable packet capture data stream. With this capability, it is possible to incorporate more functions in the same physical server, increasing the value of the appliance.

As the third platform grows, so do data demands. Emerging technologies are enabling telcos and carriers to manage these demands without sacrificing performance. Accelerating network management and security applications is a critical strategy that will create strong networks that keep customers happy and companies in business.

About the Author

Daniel Joseph Barry is VP Positioning and Chief Evangelist at Napatech and has over 20 years’ experience in the IT and telecom industry. Prior to joining Napatech in 2009, Dan Joe was Marketing Director at TPACK, a leading supplier of transport chip solutions to the telecom sector. From 2001 to 2005, he was Director of Sales and Business Development at optical component vendor NKT Integration (now Ignis Photonyx), following various positions in product development, business development and product management at Ericsson. Dan Joe joined Ericsson in 1995 from a position in the R&D department of Jutland Telecom (now TDC). He has an MBA and a BSc in Electronic Engineering from Trinity College Dublin.
