This Industry Viewpoint was authored by Scott Sumner, VP Strategic Marketing, Accedian
Market forces in the enterprise business services space are driving providers to adapt their delivery models. They’re moving away from single-function, proprietary hardware appliances in favour of virtualized platforms centrally hosted in the cloud to minimize the costs and limitations of on-premises equipment.
Deploying virtualized customer premises equipment (vCPE)—with virtual network functions (VNFs) running on commercial off-the-shelf (COTS) hardware—has the potential to reduce costs, improve margins, and boost service innovation. But, like most things, the devil is in the details.
Before getting into the current state of the vCPE space, and how operators should navigate the complexities of adopting this technology, it’s worth stepping back to ask: Why virtualize customer premises equipment in the first place?
The short answer is that installing, managing, and maintaining equipment at the customer premises is complicated and expensive—for the provider and the customer alike. There are huge advantages to streamlining the whole equation by centralizing service management and reducing CPE as much as possible.
Although providers might see vCPE deployment for SMB customers as the most straightforward application at the moment—simply because smaller businesses have less sophisticated service requirements—the vCPE model is starting to take shape and roll out to large enterprises as well.
As an early use case for software defined networking (SDN) and network functions virtualization (NFV), vCPE supports the long-term transformation of providers from connectivity suppliers to value-added service enablers. The deployment of vCPE will have a big impact on service provider infrastructure and operations—and that’s a good, necessary thing.
In a competitive, rapidly changing market, providers can only get so far for so long by cutting costs. Their long-term sustainability depends on increasing revenue by providing differentiated value. So, although vCPE benefits like reduced capex and opex are frequently touted, the industry is coming to realize that other factors are just as important—if not more so. For service providers, these other factors include:
- Faster time to market
- More rapid service innovation
- Reduced management complexity
- Fewer site visits
Providers obviously benefit when their customers are also better off. How might that play out with vCPE? Long-desired attributes—faster, simpler service installation; access to rapid service innovation; new, lower-cost pricing models; more control over service configuration options; and more reliable service—are now becoming possible, and customers are taking notice.
Now, some observations about the state of the vCPE space today:
- Many vendors are ready with vCPE products.
- vCPE does not completely eliminate the need for some service functionality at the customer premises (often dependent on customer or application-specific requirements).
- Network demarcation needs to be considered separately from service demarcation, as a reliable connection to the customer site is essential—all services are delivered over it. Network layer functions are also the most ‘delicate’ to virtualize, as quality of service can be impacted by poor performance in packet forwarding and conditioning.
- vCPE involves three main elements: management and orchestration (MANO) software stack, customer premises platform, and set of VNFs.
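To make the relationship between these three elements concrete, here is a minimal, illustrative-only sketch in Python. The class and field names are hypothetical (no vendor's MANO API works exactly this way); the point is simply that the orchestration layer mediates between a pool of VNFs and the finite compute capacity of a premises platform.

```python
from dataclasses import dataclass, field

@dataclass
class VNF:
    """One virtual network function, e.g. a firewall or WAN accelerator."""
    name: str
    cpu_cores: int
    memory_gb: int

@dataclass
class PremisesPlatform:
    """The COTS box at the customer site hosting the NFVI."""
    site_id: str
    cpu_cores: int
    memory_gb: int
    deployed: list = field(default_factory=list)

class Orchestrator:
    """Stand-in for the MANO stack: places a VNF onto a platform
    only if the platform's remaining compute capacity allows it."""

    def deploy(self, vnf: VNF, platform: PremisesPlatform) -> bool:
        used_cpu = sum(v.cpu_cores for v in platform.deployed)
        used_mem = sum(v.memory_gb for v in platform.deployed)
        if (used_cpu + vnf.cpu_cores <= platform.cpu_cores
                and used_mem + vnf.memory_gb <= platform.memory_gb):
            platform.deployed.append(vnf)
            return True
        return False  # capacity exhausted; MANO must look elsewhere
```

In a real deployment the orchestrator also handles lifecycle events (scaling, healing, upgrades), which is precisely where the maturity gap noted below shows up.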
- Many early vCPE deployments rely on manual provisioning of VNFs, which is necessary to get started but is an unsustainable model long-term. The maturity of NFV orchestrators is often cited as lagging behind current service provider deployment needs.
- Operators adopting vCPE need to be careful not to introduce new operational complexity, given the many hardware and software options available and the management changes involved in moving to a model where hardware is abstracted from the service layer.
Given these factors, how should service providers navigate the tricky waters of vCPE and avoid pitfalls?
Here’s a checklist of sorts:
- Take a long-term, big-picture view of vCPE deployment; it’s a platform for all service delivery, not an approach to delivering individual services.
- Identify which network functions can be effectively virtualized and which might be best left at the customer premises. Ask the question: what performance impact will result from this VNF placement decision?
- Carefully consider the benefits of using intelligent network interface devices (NIDs) to demarcate service delivery and monitor QoS and QoE at the customer site. Many operators have concluded that NIDs, or virtualized variants, are still the most efficient method to maintain a performance-assured customer edge.
- Maintain a continuous improvement approach by integrating customer feedback into decisions about transforming the business model.
- Keep in mind the potential of vCPE to improve market position. Differentiation is rapidly becoming all about network flexibility and innovation.
- Avoid confusing customers with too many choices.
Finally, to get right down into the details, let’s take a quick look at the three main options operators have for deploying vCPE and the relative merits of each one:
Localized/uCPE: Virtualize physical appliances at the customer premises, leaving in their place the NFVI and VNFs needed to implement network services. Here, the NID function can be a smart module installed in the compute platform, or integrated as a VNF within the platform.
- Attractive for early NFV-based vCPE deployments.
- Replaces single-function, legacy appliances with a single NFVI instance (COTS appliance + virtualization software stack).
- Risks returning complexity to CPE hardware.
- Inelastic compared with using centralized data centers to host functions.
Centralized: Functions previously hosted on CPE are pulled back into the provider network as VNFs running on NFVI in a data center or edge PoP (e.g. CORD).
- Enables effective sharing/pooling of network resources.
- Hardware remaining on premises is a NID or smart SFP module to demarcate service.
- Simplifies adding compute capacity on an as-needed basis.
- Requires upfront investment in a full NFV infrastructure (data center, pool of COTS hardware resources, NFVI framework, and VNFs).
- Using currently available technology, may not be able to meet latency and security requirements for all services.
Distributed/Hybrid: Place VNFs on an NFVI located either at the customer premises or within the service provider cloud.
- The most flexible of the options discussed here.
- An overlay network between end locations and cloud allows providers to extend managed services outside their geographic footprint.
- Presents a migration path for providers that starts with the uCPE model.
- The most complex of the options discussed here.
- Requires sophisticated policy and orchestration for best placement of each workload’s VNFs.
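The placement policy that the hybrid model demands can be sketched very simply. The following Python fragment is an assumption-laden illustration, not a production algorithm: the 20 ms cloud round-trip figure and the two decision criteria (latency budget and data locality) are hypothetical stand-ins for the richer policy inputs a real orchestrator would consult.

```python
# Assumed round-trip time from a customer site to the provider's
# nearest cloud PoP; a real policy would measure this per site.
CLOUD_RTT_MS = 20

def place_vnf(max_latency_ms: float, data_must_stay_on_site: bool) -> str:
    """Return 'premises' or 'cloud' for a single VNF, using two
    illustrative criteria: compliance pinning and latency budget."""
    if data_must_stay_on_site:
        return "premises"   # security/compliance pins the function locally
    if max_latency_ms < CLOUD_RTT_MS:
        return "premises"   # cloud RTT alone would blow the latency budget
    return "cloud"          # pool it centrally for elasticity and sharing
```

Even this toy version shows why the hybrid model is the most complex: every VNF, not just every service, must be evaluated against per-site network conditions and per-customer policy.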
Decisions about vCPE deployment inevitably boil down to the question: where should VNFs be located? The answer could be at the customer premises, centralized in the data center, or a hybrid of the two. The best answer for a particular provider should be based on the requirements of the use cases involved, potentially with customer input. Performance, security and policy, cost, and operations are all factors to consider.
Conceptually, the centralized vCPE model is the most advantageous for the long term. But it might not be realistic at the moment, given that technology and standards are still evolving. What’s likely is that many operators will opt for a hybrid approach to vCPE and VNF placement. It’s therefore critical that NIDs, or their virtualized demarcation and monitoring function equivalents, are in place to cover the eventual transition to full vCPE.