The road to 100G and beyond – bandwidth aggregation

June 25th, 2008

Last time we looked at pure 100G wavelengths; now let's look at bandwidth aggregation.  To reach 100G, why not take ten 10G connections, bundle them together, and put them in a package so seamless that nobody can tell the difference?  If it takes 100G as input, delivers 100G as output without losing anything, and does it cheaply, isn't that good enough?  Well, as long as you add in the cost of obtaining and lighting many more fibers down the line, sure.  This solution decouples the speed of the optics from the speed of the service, and depends on miniaturization and integration of existing technology rather than on the development of new methods.
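Mechanically, this bundling is just inverse multiplexing, the same idea as a link aggregation group: spread traffic over ten 10G lanes so the bundle presents itself as one 100G pipe. A minimal sketch (function and constant names are my own, not any vendor's API) of the usual hash-based lane selection, which keeps each flow's packets on one lane so they arrive in order:

```python
# Sketch of hash-based lane selection for a 10 x 10G bundle.
# Hashing the flow 5-tuple pins each flow to one lane (preserving
# packet order) while spreading different flows across lanes.
import hashlib

NUM_LANES = 10      # ten 10G lanes aggregated into a nominal "100G"
LANE_GBPS = 10

def pick_lane(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    """Map a flow's 5-tuple to one of the ten lanes."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return digest[0] % NUM_LANES

# Every packet of a given flow lands on the same lane...
lane = pick_lane("10.0.0.1", "10.0.0.2", 49152, 80)
assert lane == pick_lane("10.0.0.1", "10.0.0.2", 49152, 80)
# ...which also means a single flow can never exceed LANE_GBPS,
# no matter how much of the bundle's total capacity sits idle.
```

That last comment is the seam in the "nobody can tell" packaging: a true 100G wavelength carries a 40G flow without blinking, while a hashed bundle of 10G lanes cannot.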


Pros:

  • less dependence on scientific breakthroughs
  • cost savings from miniaturization and integration on chips are better understood, and may even follow Moore's law


Cons:

  • fiber requirements go up rapidly
  • at some point the difference between the speed of the optics and the speed of the service causes efficiency losses
  • more power to light more fibers means your costs increasingly depend on the price of that power, and power costs are hard to predict
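The efficiency-loss point is easy to see with a toy simulation (illustrative numbers only, not measurements): because per-flow striping loads the lanes unevenly, some lanes saturate while others sit idle, so the bundle starts dropping traffic well before its nominal 100G.

```python
# Toy illustration of striping inefficiency: 40 flows of 2G each
# (80G offered, nominally under the 100G bundle capacity) are
# assigned to lanes at random, standing in for a flow hash.
import random

NUM_LANES, LANE_GBPS = 10, 10
random.seed(1)  # fixed seed so the run is repeatable

lanes = [0.0] * NUM_LANES
for _ in range(40):                      # 40 flows x 2 Gbps
    lanes[random.randrange(NUM_LANES)] += 2.0

busiest = max(lanes)
print(f"busiest lane carries {busiest:.0f}G of its {LANE_GBPS}G capacity")
```

The average lane load is 8G, but the maximum is almost always higher; any lane pushed past 10G is dropping packets even though the bundle as a whole is only 80% "full". A single 100G wavelength has no such internal boundaries.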


Winners and losers:

  • fiber-rich carriers with extra conduits, such as Qwest and Level 3, win big
  • fiber-poor carriers like Cogent or Sprint have to find more fiber on a regular basis


Wild card:

  • If anyone figures out how to put substantially more useful wavelengths on a fiber – say via SOAs (semiconductor optical amplifiers) – in a feasible manner, then everything gets a lot easier.



1 Comment

  • Frank A. Coluccio says:

Rob, you have essentially touched on, and in the process reiterated, a major consideration without expressly stating it in so many words. And that is: link aggregation, when aggregating dissimilar flows, in many ways (but not all) has the same effect from an architectural standpoint as, and hence effectively replaces, end-point bandwidth aggregation 'switches', which are the more commonly understood approach to dealing with a preponderance of lower-order flows. This has traditionally been the criterion for moving to the next power of ten in switching hierarchies, hence the need for higher orders of data-bearing capacity as well, leading specifically to the following of the x10 multiplier rule.

It still doesn't introduce any advantage where the need for more capacity is the goal, but it makes the problem of plant administration more manageable. Similar advantages were conceived during the Seventies when digital access and cross-connect systems (DACS, DCS, etc.) were first introduced, and later we saw this on the order of what Infinera accomplishes today, in architectures that Ciena and others employed, only without the sophistication of electrical methods of mitigating the effects of optical dispersion. I wish I had more time ….
