This Industry Viewpoint was authored by Scott Sumner - VP, Solutions Marketing, Accedian Networks
The industry is buzzing about the promise of the Internet of Things (IoT) and predictions about its growing influence. Gartner forecasts that 4.9 billion connected ‘things’ will be in use in 2015, up 30 percent from 2014, and will reach 25 billion by 2020. That’s potentially great for consumers and industry, assuming network operators are prepared to handle the resulting influx of data traffic.
Even though each individual ‘thing’ is likely to generate a relatively small amount of data, the demand from billions of devices will add up. “Connectionless” IoT traffic also tends to be chatty and bursty, and can create several times the impairments of “regular” mobile and internet traffic. IoT could create so much “noise” that it begins to debilitate the network. Those handy Nest thermostats might just kill your Netflix night by phoning home, spewing micro-bursts and causing packet loss along the way. Somehow, all this new traffic has to be put in its place.
To proactively prevent IoT-related problems, operators and service providers need tools to analyze and optimize networks end-to-end on a per-flow basis. This will allow them to make sure priority traffic—stuff generated by humans, like voice and video calls, gaming, and other delay-sensitive applications—gets sufficient bandwidth, and isn’t overrun by competing IoT applications. Working smarter with sophisticated, cost-effective performance monitoring tools that are aware of what’s happening at every layer and location in the network will keep IoT from sneaking up on providers.
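A minimal sketch of what per-flow prioritization could look like, with illustrative class names, thresholds, and application labels (none taken from any vendor's actual product): delay-sensitive human traffic gets the top queue, while chatty, small-packet flows typical of connectionless IoT are pushed to best-effort.

```python
# Hypothetical sketch: classify flows and assign queue priority so that
# delay-sensitive "human" traffic is not starved by chatty IoT flows.
# All names and thresholds here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Flow:
    app: str            # e.g. "voip", "video", "sensor-telemetry"
    pkts_per_sec: float
    avg_pkt_bytes: int

DELAY_SENSITIVE = {"voip", "video", "gaming"}

def priority_for(flow: Flow) -> int:
    """Return a queue priority: 0 = highest, 2 = best-effort."""
    if flow.app in DELAY_SENSITIVE:
        return 0
    # Chatty, small-packet flows are typical of connectionless IoT traffic.
    if flow.pkts_per_sec > 50 and flow.avg_pkt_bytes < 128:
        return 2
    return 1

for f in [Flow("voip", 50, 160), Flow("sensor-telemetry", 200, 64)]:
    print(f.app, priority_for(f))
```

In a real network this decision would be expressed as DSCP marking or queue assignment at the edge, but the shape of the logic is the same.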
Monitoring and managing the various services related to M2M and IoT is a major challenge for network operators. Deploying ubiquitous coverage with scalable reporting and visualization, while driving out cost, is difficult to achieve but essential nonetheless. Once that is in place, the next step is making the network “performance aware”: automating control over network usage and per-application routing in real time to make sure all services get ‘what they deserve.’
In short, we need machines to handle the performance optimization required for upholding QoE expectations—and we need a real-time feed of network performance to make this possible. This allows operators to focus on their business, reel in “top talker” bots, and make sure humans get their share of the network.
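The closed loop described above can be sketched in a few lines. The feed format, the loss threshold, and the remediation action below are all assumptions for illustration, not a real operator API: the loop watches a real-time performance feed and, when loss crosses a threshold, returns the “top talker” endpoints to rate-limit.

```python
# Illustrative closed-loop sketch: consume a real-time performance feed and
# throttle "top talker" IoT endpoints when packet loss crosses a threshold.
# The feed format, threshold, and action are assumptions, not a real API.

from typing import Dict, List

LOSS_THRESHOLD = 0.01  # 1% packet loss triggers action

def top_talkers(bytes_by_endpoint: Dict[str, int], n: int = 3) -> List[str]:
    """Endpoints sorted by traffic volume, heaviest first."""
    return sorted(bytes_by_endpoint, key=bytes_by_endpoint.get, reverse=True)[:n]

def control_step(metrics: dict) -> List[str]:
    """One iteration of the loop: return endpoints to rate-limit."""
    if metrics["packet_loss"] <= LOSS_THRESHOLD:
        return []  # network healthy; no action needed
    return top_talkers(metrics["bytes_by_endpoint"])

sample = {
    "packet_loss": 0.03,
    "bytes_by_endpoint": {"meter-17": 9_000_000, "cam-2": 4_000_000, "phone-1": 500_000},
}
print(control_step(sample))  # heaviest senders first
```

The point is that no human sits in this loop; the performance feed drives the policy decision directly.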
A uniform monitoring fabric can be cost-efficiently deployed by using SDN and NFV principles to create a virtualized “instrumentation layer.” This approach uses existing network elements’ built-in support for performance monitoring standards, supplemented by smart small form-factor pluggables (SFPs), monitoring modules, and agents to cover blind spots. Next, it employs a performance assurance “controller” to orchestrate test sessions, maintain inventory and control of all remote test points, and provide a real-time feed of performance to network and SDN controllers.
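The controller role described above can be illustrated with a short sketch. The class and method names are hypothetical, but the responsibilities match the text: keep an inventory of remote test points (smart SFPs, modules, agents) and orchestrate test sessions between them, whose results would then feed the SDN controller.

```python
# A minimal sketch of the "performance assurance controller" idea: maintain an
# inventory of remote test points and orchestrate test sessions between them.
# Class and method names are hypothetical, for illustration only.

from dataclasses import dataclass, field
from itertools import combinations
from typing import Dict, List, Tuple

@dataclass
class TestPoint:
    point_id: str
    kind: str  # "smart-sfp", "module", or "agent"

@dataclass
class AssuranceController:
    inventory: Dict[str, TestPoint] = field(default_factory=dict)

    def register(self, tp: TestPoint) -> None:
        """Add a remote test point to the controller's inventory."""
        self.inventory[tp.point_id] = tp

    def schedule_mesh(self) -> List[Tuple[str, str]]:
        """Full-mesh test sessions between all registered test points."""
        ids = sorted(self.inventory)
        return list(combinations(ids, 2))

ctrl = AssuranceController()
for pid, kind in [("edge-1", "smart-sfp"), ("core-1", "module"), ("cpe-9", "agent")]:
    ctrl.register(TestPoint(pid, kind))
print(ctrl.schedule_mesh())  # session pairs whose results feed the SDN controller
```

In practice the sessions themselves would use standard active-measurement protocols supported by the network elements; the sketch only captures the orchestration layer.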
As consumers and businesses increase their use of M2M-connected devices and IoT services, avoiding service loss will become as important as having the connection. In response, operators must be able to maintain expected QoE, improve data security and reliability, maximize the value and usage of their infrastructure, and reduce operating costs, all in the face of exponentially increasing network traffic. No matter how you look at it, IoT is becoming a major factor in the evolution of telecom computing and network performance assurance.
An instrumentation layer that makes use of virtualization/NFV can be a very effective and affordable way for operators to proactively keep pace with the impact of IoT. The building blocks exist today; putting them together is the work ahead.