The road to 100G and beyond

June 24th, 2008 · 7 Comments

In a response to my exaflood post, a regular commenter, Frank Coluccio, brought up a subject that I had skirted over but which deserves discussion.  Specifically, he mentioned that the way in which we reach for 100G+ matters a great deal, and is currently in dispute.  This is a difficult subject to talk about without descending into the netherworld of technical acronyms never to return, but I’m going to try anyway.

In order to keep up with bandwidth demand, providers will need to hook up bigger pipes – nobody disputes that.  It wouldn’t make sense to stay with 10G as the biggest pipe when one has a 10Tbps network: one would have to add 100 new pipes just to increase capacity by 10% – totally insane and impossible to manage.  So we *must* go to 100G and then on to larger pipes.  The size of the basic unit must be reasonable compared to the size of the network.
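
To put rough numbers on that, here is a back-of-the-envelope sketch in Python – the 10Tbps network and the 10% growth figure are just the illustrative values from the paragraph above, not any particular carrier's numbers:

from math import ceil

network_gbps = 10_000                 # the 10 Tbps network from the example above
growth_gbps = network_gbps * 0.10     # a 10% capacity bump = 1,000 Gbps of new pipes

for wave_rate in (10, 40, 100):       # Gbps carried per wavelength
    new_waves = ceil(growth_gbps / wave_rate)
    print(f"at {wave_rate}G per wave, +10% capacity means {new_waves} new wavelengths")

# 10G waves  -> 100 new wavelengths to light, monitor, and manage
# 100G waves -> just 10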

But there are two ways to go about it.  We could increase the density by putting more data in each and every wavelength – this is the optical purist’s approach.  Or we could take a bunch of 10G wavelengths, aggregate them, and package them so nobody can tell what we did – this is the practical engineer’s approach. In the next few posts, I’ll try to lay out the advantages, drawbacks, and implications of each, and then I’ll give my own WAG on who wins.
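
A minimal sketch of what each road asks of a single wavelength – assuming, purely for illustration, that the purist's denser modulation is QPSK on two polarizations (the scheme raised in the comments below) and that today's 10G waves are plain on-off keying; FEC and framing overhead are ignored:

def gbaud(bit_rate_gbps: float, bits_per_symbol: int, polarizations: int) -> float:
    """Symbol rate one wavelength must run at to carry the given bit rate."""
    return bit_rate_gbps / (bits_per_symbol * polarizations)

# Road 1: the optical purist - one 100G carrier with denser modulation (DP-QPSK assumed)
print("1 x 100G DP-QPSK wave :", gbaud(100, 2, 2), "Gbaud")      # 25.0 Gbaud

# Road 2: the practical engineer - ten of today's 10G on-off-keyed waves, bundled
print("10 x 10G OOK waves    :", gbaud(10, 1, 1), "Gbaud each")  # 10.0 Gbaud each

# The purist has one wave to manage but pushes optics and electronics well past 10 Gbaud;
# the engineer keeps each wave easy but must make ten waves look like one pipe.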


Categories: Internet Backbones · Telecom Equipment

Join the Discussion!

7 Comments So Far


  • Andrew says:

    I think the answer is pretty straightforward. The need for 100G in the next 3-5 years is driven by dense 10GE transport. Putting 100G on 10 lambdas solves nothing. There needs to be a muxing scheme that makes it cost-effective to transport 10x10GbE. The new modulation schemes proposed in the IEEE using QPSK and two optical polarizations are how it will happen.

    Infinera has demoed a 100G solution using the 10x10G wavelength scheme you mention, but I don’t think it solves anything. Infinera solved the problem by just making dense 10G transport cheap.

    The demand for 40G is driven mostly by the same need. If the new modulation scheme can be worked out, it is possible 40G simply never happens.

  • craigp says:

    There are different variables at play here and, as always, “it depends”. Carriers rich in fiber and not as concerned with spectral efficiency may look at an Infinera 10×10 as an acceptable solution in the near term. The client signal over the next couple of years may still be 10G until 100G is realized, so 10×10 may give you better economics in the short run. It’s the same reason many carriers deploy 4x10G rather than 40G: it simply costs less, link aggregation efficiency has improved, and the operational implications have not yet been quantified as reaching a tipping point where 40G makes more sense even at a premium. Now, not all companies have made this decision; clearly others have moved to 40G already.

    I do agree that at some point you have to increase that basic unit, otherwise operationally there are massive headaches, and that may mean accepting a premium (penalty?) for 100G to realize the operational simplification.

  • Frank A. Coluccio says:

    It’s great that you’ve resurrected this discussion, Rob. I’d like to add to Andrew’s and craigp’s comments, if I may, stipulating that this is a topic demanding not one conversation, but many conversations.

    The barriers to 100G are not limited to merely being able to justify the costs of 4x or 10x; they are also, and perhaps more importantly, grounded in some very elusive and oftentimes time-varying (dynamic) physics, and they require satisfying a myriad of architectural roles beyond those of mere ballistics.

    In a trailer to a lengthier treatment I sent to the Gilder Forum yesterday, which I titled, “40Gbps? 100Gbps? 1Tbps? Where’s the Beef?”, I concluded:

    “The solution to many of these problems lies in devising methods to overcome a number of anomalies related to polarization and other forms of dispersion that become unmanageable beyond 10 to 20 Gbps (actually, 10 to 20 giga_baud_), which would allow for continued graduations in bit rates of RZ and/or NRZ formats, which few vendors have been able to do both economically and consistently. FAC”

    In retrospect, the formats needn’t be RZ or NRZ, since the actual format isn’t as important as adherence to a single format, if all other things remain equal and any-to-any wavelength exchange weighs heavily in the ultimate goal.

    The OIF purports to be striving for such a single format, although thus far it has been vague concerning the desired calculus, whether it will be 40-40-10, or 100 out of the box, or some other formula, and of course it must align with IEEE 100G 802.xx and ITU OTN bundles, too.

    One OIF release I read only yesterday “suggests” it might be a straightaway 100G per lambda solution, but this would presuppose solving the physics first, which only a couple, maybe three or four, entities have claimed the ability to do in an economical manner for reaches beyond the metro and regional. And those seem invariably to require expensive back-to-back mux-demux provisioning in repeater huts, albeit at improved distances.

    Regrettably, a few entities that turned out to be mortalities during the optical winter were pursuing solutions to these problems, anticipating the issues we’re now discussing here today, and some of them, from what I’ve learned over time, actually did, claiming at the time to have passed proof of concept with testimonials from research centers backing them up. But the institutional nature of their remnants, and the rights to their patents, were either sold or in some other way compromised. The whereabouts and ownership of all but one of them, which I can’t say too much about, constitute an enigma to me – just as the larger issue of this discussion remains an enigma in many ways.

  • Skibare says:

    Being a “Hack” at this investing stuff, and given my track record and Jim Crowe’s RESULTS, does Level3 have a chance of leading this long awaited 100G Bandwidth Surge in Demand????

    Bottom line to reading and investing is WHICH HORSE do you put your jockey on for the TriFecta?????????? tia

  • Frank A. Coluccio says:

    Some questions raised at Cannes, courtesy of fibresystems.org:

    40G: it’s time for a reality check
    Jun 24, 2008

    The mood here in Cannes at IIR’s WDM & Next Generation Optical Networking conference is definitely upbeat, but there’s one topic where carriers are sounding a note of caution: 40G.

    Cont: http://tinyurl.com/6krgnx
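
As a footnote to the dispersion points raised in the discussion above, here is a minimal sketch of the commonly cited scaling rules of thumb – chromatic dispersion tolerance falls roughly with the square of the symbol rate, and first-order PMD tolerance roughly linearly with it. The 10G baselines are ballpark figures for illustration, not measured values:

BASE_BAUD = 10.0        # Gbaud - today's 10G NRZ signal
BASE_CD_PS_NM = 1000.0  # ballpark uncompensated chromatic-dispersion tolerance at 10G, ps/nm
BASE_PMD_PS = 10.0      # ballpark mean PMD tolerance at 10G, ps

for baud in (10, 20, 40, 100):
    scale = baud / BASE_BAUD
    cd_tolerance = BASE_CD_PS_NM / scale ** 2   # CD tolerance ~ 1 / (symbol rate)^2
    pmd_tolerance = BASE_PMD_PS / scale         # PMD tolerance ~ 1 / symbol rate
    print(f"{baud:3d} Gbaud: ~{cd_tolerance:6.0f} ps/nm CD, ~{pmd_tolerance:4.1f} ps PMD")

# Which is why the per-wavelength *symbol* rate matters more than the bit rate:
# multi-bit-per-symbol modulation lets 100 Gbit/s ride on roughly 25-30 Gbaud.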
