This Industry Viewpoint was authored by the MEF.
When the new generation of Carrier Ethernet – CE 2.0 – was launched in 2012, MEF president Nan Chen said: “With Carrier Ethernet 2.0, service providers and equipment manufacturers can reach vast numbers of locations locally, regionally and globally via the new wholesale E-Access services and standardized engineering over distance for multiple classes of service.” The wholesale market breathed a sigh of relief, because CE 2.0 provided a common global terminology, making it far easier for providers to extend their offerings into new territories without the time and cost penalties of lining up and testing the compatibility of diverse Ethernet offerings.
But does that mean that services can now be offered without the need for testing? The answer is both yes and no – as Carsten Rossenhoevel, Managing Director of EANTC (European Advanced Network Test Center) explained: “When you buy any new car, you don’t need to check if it has brakes, lights and all the basic features of a car, because there are basic standards of safety and roadworthiness taken care of by industry standards and certifications. But when it comes to choosing a vehicle for a specific purpose – SUV, sports car, family saloon or whatever – then you make your choice on the basis of further evaluation, such as published road performance tests and an individual test drive.”
Similarly with E-Access: it provides a standard for wholesale Ethernet services connectivity without lengthy customization of the interface, but then you need to consider the customer’s actual end-to-end service needs. There could be specific requirements for latency, for high reliability, for low jitter, for high bandwidth, scalability etc. that lie beyond the E-Access standard – just as a family saloon’s roadworthiness certificate is no guarantee that it would be suitable for off-road work.
Levels of testing
“It helps to distinguish three levels of testing,” explained Rajesh Rajamani, senior product marketing manager at Spirent Communications: “conformance testing, functional testing and performance testing.”
Conformance testing verifies that the network or service conforms to a required standard. This testing has already taken place if the service is certified for E-Access but, as Rajamani points out, the rate of churn and upgrade in most networks means it is nevertheless worth running regular conformance tests to make sure the E-Access Certified label still applies.
Functional testing drills down to specific service demands, such as whether the connection would be suitable for video streaming, for VoIP, disaster recovery or other customer requirements. This is where the service provider can begin to differentiate their offering and target specific markets. A service may be based entirely on equipment that has itself been certified to recognized standards and tested for performance; yet when the network is assembled, it turns out that the whole is less than the sum of its parts. A substantial share of service quality issues in networks built on mature standards such as Carrier Ethernet stem from network configuration. Again, it is necessary to test the service end to end, not just part by part.
Once it has been determined that the network can meet these service demands, the next question is: will it meet them in practice? It is one thing to deliver massive bandwidth under ideal conditions, quite another to ensure that the bandwidth will be available to thousands of endpoints, day in, day out, under a whole range of everyday working conditions or even under extreme load, fault conditions or cyber attacks.
Performance testing addresses this difference between what can and what will be delivered, and it requires a lot of experience to first decide what are the right questions to ask, before testing for answers. Today’s performance test devices are able to recreate realistic operating conditions in the laboratory: these include realistic everyday traffic that can be scaled to simulate extreme “rush-hour” conditions, as well as possible faults. Note that “realistic conditions” is not just a question of superimposing different types of data traffic, but also of recreating their different patterns: for example a video stream is a continuous high bandwidth demand, whereas VoIP comes in irregular two-way bursts, as people speak or are silent.
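The point about recreating traffic patterns, not just traffic volumes, can be sketched in a few lines of Python. This is a simplified illustration rather than a real traffic generator; the bitrates, flow counts and “rush-hour” multiplier are made-up assumptions:

```python
import random

def video_load(t, mbps=8.0):
    """Video streaming: a continuous, steady bandwidth demand at every tick."""
    return mbps

def voip_load(t, mbps=0.1, talk_prob=0.4):
    """VoIP: irregular talk-spurts, demanding bandwidth only while speaking."""
    return mbps if random.random() < talk_prob else 0.0

def aggregate_load(t, n_video, n_voip, scale=1.0):
    """Superimpose many flows; 'scale' emulates rush-hour subscriber growth."""
    total = sum(video_load(t) for _ in range(n_video))
    total += sum(voip_load(t) for _ in range(n_voip))
    return total * scale

random.seed(42)
# Off-peak: 10 video streams and 200 VoIP calls, sampled over 60 ticks
offpeak = max(aggregate_load(t, 10, 200) for t in range(60))
# "Rush hour": the same mix with triple the subscriber load
rush = max(aggregate_load(t, 10, 200, scale=3.0) for t in range(60))
print(f"off-peak peak demand: {offpeak:.1f} Mb/s, rush-hour: {rush:.1f} Mb/s")
```

Even a toy model like this shows why simply scaling average throughput is misleading: the bursty VoIP component makes peak demand fluctuate in a way a flat bandwidth figure hides.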
If it is important to know how the service will work under attacks, then cloud-based test procedures can connect to a database of the latest cyber attacks and malware that is constantly being updated to include every likely attack condition.
Unfortunately, the possibilities for performance testing are endless, so professional skill and considerable experience are needed to strike the right balance between ideal performance, acceptable performance and likely operating conditions, and so create truly practical, cost-efficient test processes.
Delivering a useful business service goes beyond providing a connection that meets certain conformance, functionality and performance requirements: the service must also be manageable.
As Rossenhoevel explains, it is one thing to make an interoperable connection, but quite another to ensure that Operations, Administration and Maintenance functions (OAM) are actively supported across the network: “If a single provider’s service is not working, you can expect them to troubleshoot without delay. But if the faulty service crosses several provider networks, often a lot of time has been wasted in analysis, because service providers did not have the right tools to identify the root cause quickly. CE 2.0 now includes the necessary means for inter-provider OAM.”
For Madhan Panchaksharam, Senior Product Manager at Veryx Technologies, the problem is not just about maintaining service levels and interoperability while circuit configurations keep evolving with on-going operational and equipment upgrades – there is also the challenge of complexity. “CE 2.0 is generally well understood by technically strong teams such as Network Engineering and Network Architecture. However, teams involved day-to-day in testing and turning up services as well as monitoring and troubleshooting – typically the network operations teams – often have a limited understanding of these definitions.”
Madhan Panchaksharam continues: “Although getting their services certified is a big step towards MEF CE 2.0 network integrity, often service providers solely rely on either ITU-T Y.1564 or RFC 2544 during service activation. While necessary, these tests prove to be insufficient to ensure carrier grade service delivery network-wide.”
Cloud service providers in particular want to manage Carrier Ethernet services holistically – being able to predict where capacity and performance enhancements are needed – so that customers can remain confident that SLAs will be met.
Best testing practice
So conformance testing is only the first step, as Madhan Panchaksharam explains: “Experience shows there are a number of problems that lie undetected until subscribers start using their services. For instance, most customer-reported issues trace back to configuration mismatches and equipment interoperability issues, resulting in problems relating to VLAN preservation, CoS label preservation, MTU handling, burst handling, port security and control packet handling. This is because RFC 2544/Y.1564 is focused on performance parameter verification. It does not ensure many of the functional aspects as described by MEF.”
As Carsten Rossenhoevel points out, MTU handling is included in CE 2.0 certification only to the minimum legacy IEEE requirement (a packet size of 1526 bytes), whereas today’s business and cloud Ethernet services usually require an MTU of at least 2000 bytes. CE 2.0 conformance testing creates a level playing field at the minimum-requirements level; individual functional and performance testing ensures that vendors and service providers can meet advanced customer requirements.
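As a toy illustration of the functional checks that throughput-oriented RFC 2544/Y.1564 runs do not cover, the sketch below compares a frame’s attributes at the ingress and egress of a service. The frame representation, attribute names and the 2000-byte threshold are illustrative assumptions, not a real capture or test API:

```python
def check_preservation(sent, received, min_mtu=2000):
    """Flag functional faults a pure throughput test would not catch."""
    faults = []
    if sent["vlan_id"] != received["vlan_id"]:
        faults.append("VLAN ID not preserved")
    if sent["cos"] != received["cos"]:
        faults.append("CoS marking not preserved")
    if received["mtu"] < min_mtu:
        faults.append(f"MTU {received['mtu']} below required {min_mtu}")
    return faults

# Hypothetical ingress/egress snapshots: CoS remarked, MTU clipped in transit
sent = {"vlan_id": 100, "cos": 5, "mtu": 2000}
received = {"vlan_id": 100, "cos": 0, "mtu": 1526}
print(check_preservation(sent, received))
```

A service could pass an RFC 2544 throughput run with flying colours and still fail every check above, which is exactly the gap Panchaksharam describes.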
Service providers should run a verification check on CE 2.0 attributes every time a new service is turned up, rather than troubleshoot issues that arise later – especially if the service spans multiple providers. CE 2.0 provides a common standard language that clarifies communication and reduces the tendency to waste time blaming other operators for problems.
What’s more, there is a need for realistic performance testing in terms of customers’ service level expectations and agreements. The majority of business applications need assurance of continued performance during normal operating conditions, but do not necessarily need to operate faultlessly in extreme or crisis situations. It is often more efficient simply to know the performance limits and to have a strategy in place to deal with crises, rather than spend a fortune making the system totally bomb proof. Performance testing provides guidance on what the system can manage and on how it might fail – so how best to plan around failure.
Testing to this level is not a simple matter, and would take a lot of time and effort if attempted manually. Automated testing becomes essential, not only to make sure the tests can be performed quickly but also to reduce the need for skilled personnel. Once testing becomes a burden it will slow down service delivery or else be side-lined for quick results.
Automated testing is a game-changer because it makes it feasible to monitor performance at near line rate – for example, running continuous continuity checks with loop-back signals to trigger automated fault management, so the system can divert traffic or self-heal in less time than any human operator could even detect the fault.
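The continuity-check pattern described above can be sketched as a simple monitoring loop. In a real deployment the loopback exchange would be an IEEE 802.1ag / ITU-T Y.1731 CFM operation; here the endpoint names and the `loopback_ok` stub are purely illustrative assumptions:

```python
def loopback_ok(endpoint):
    """Stub for a real CFM loopback (LBM/LBR) exchange with an endpoint.
    Here we simulate a fault on one endpoint for illustration."""
    return endpoint != "pe2"

def continuity_sweep(endpoints, on_fault):
    """Run one sweep of continuity checks, invoking the fault handler
    immediately so traffic can be diverted without human intervention."""
    for ep in endpoints:
        if not loopback_ok(ep):
            on_fault(ep)

diverted = []
continuity_sweep(["pe1", "pe2", "pe3"], on_fault=diverted.append)
print(diverted)  # the faulty endpoint is flagged within the same sweep
```

The key design point is that the fault handler fires inside the sweep itself, so remediation starts as soon as a loopback fails rather than after a human reads an alarm.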
Finally, it is vital to stick to standard CE 2.0 definitions and terminology when reporting the results of these tests. MEF terminology gives wholesale partners a common language in which to share test results and accelerate troubleshooting in multi-operator networks.
The burgeoning complexity of multi-carrier cloud services would be a daunting prospect were it not for the MEF’s work in defining basic E-Access specifications and providing a common CE 2.0 language and criteria. The industry has responded with a wealth of sophisticated network test services and automated test devices that make fast, simple and ongoing testing a practical, cost-effective proposition.
This is more important than ever for, as Rajamani concludes: “Failure in the field is very expensive. With rapid growth in Internet traffic, plus constant churn and system upgrades, it is more important than ever to deploy equipment that is reliable and to provide service guarantees equal to, or even better than, those that business traditionally expects.”
Telecom Ramblings is a media partner for the MEF’s GEN14 conference this November.