The exaflood – to fear or to rejoice?

June 20th, 2008 · 5 Comments

Suppose the core backbones are growing at 60% a year.  If 10G pipes became ‘too small’ to manage current backbones in 2007, then when does 40G become too small?  Math says in just 3 years, in 2010 (can you say AT&T?).  Any guesses how many years it takes to make 100G too small?  Only 2 years later, in 2012, those 100G circuits we don’t even have yet won’t be enough any more.  And how about the fabled Terabit (Tbps)?  Too small in 2017.  Exponentials are like that.  As Infinera’s Singh pointed out at NXTcomm08 the other day, it has taken 8 years since 10G came out to make 40G economically feasible.  If technological advancement doesn’t speed up, the ‘exaflood’ really will break the internet, like AT&T says it will.  Scary, eh?
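For the curious, here is the arithmetic behind those dates as a quick sketch, assuming steady 60% annual growth and 2007 as the year 10G ran out of headroom:

```python
# Years until a pipe of a given size is outgrown, assuming traffic
# compounds at 60% per year from a 10G baseline in 2007.
import math

GROWTH = 1.60            # 60% annual growth
BASE_YEAR, BASE_G = 2007, 10

def too_small_year(capacity_g):
    # Solve BASE_G * GROWTH**t = capacity_g for t, then round.
    t = math.log(capacity_g / BASE_G) / math.log(GROWTH)
    return BASE_YEAR + round(t)

for g in (40, 100, 1000):
    print(f"{g}G too small around {too_small_year(g)}")
# -> 40G: 2010, 100G: 2012, 1000G: 2017
```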

But this is just the dilemma of Malthus, that food production cannot keep pace with population growth and therefore a collapse is inevitable, restated in a technological format.  And the assumptions behind it are, in my opinion, just as wrong this time.  Technological development of 40G and 100G has lagged *because* they didn’t make sense yet.  If internet growth applies pressure, there will be big economic payoffs for those who solve the problem – hence the problem will get solved.  If progress is slow, then bandwidth prices will go up and growth might slow – but then speed up again with the next advance.  And at least a few mega billionaires will emerge from those advances – it’s just the nature of the beast.

Optics has been a tough place to be for a long time.  Some carriers may fear the so-called ‘exaflood’ because it is their business models that will have to bear the strain of any mismatch between demand and what technology can supply.  But guys like Infinera’s Singh hope for it, root for it, even revel in it, because they know that pressure from internet growth is the only thing that will make optics fun again, like it was 8-10 years ago.  And if you can’t tell that Singh is having fun right now, you aren’t looking carefully…  He’s like the first boy at the top of a monster sledding hill the morning after a big snow, and he’s got all day.


Categories: Internet Backbones


5 Comments So Far


  • siddiqui says:

    if singh is so sure about the exaflood(?) why is he selling his stock, especially when they’ve bagged a fish as big as DT? give me a break ali baba.

  • Frank A. Coluccio says:

    Hi Rob. As usual, you’ve presented another very interesting, IMO, set of questions, some of which I’d venture to guess most active posters and lurkers here alike have at one time or another thought to ask, but never took the time to articulate 😉

    I once chaired a thread on the Compuserve Telecommunications Forum that ran for over a year, titled TNB: Total National Bandwidth.  Questions and issues similar to the ones you’ve asked in your top post were beaten to death a dozen different ways from Tuesday, but alas, no resolutions were ever reached.  TNB took place in 1995-6, just prior to the advent of commercial-grade DWDM products being unleashed on the universe, introduced for the first time en masse at SUPERCOMM96 in September of that year.

    During that era an OC-192 loomed as large as a T3 did in 1984, or as humongous as a T1 appeared in 1966 when DTE signaling rates stood around 600 bits/sec, shortly after the N.A. T-1 Digital Hierarchy and CCITT E-1 formats were being spelled out for all to see. If you look back to those earlier times you’ll find that no one really knew what to expect once digital technologies were set to be exploited, not even those who were mapping the hierarchies at the time.

    For example, DS3s, which we normally associate with T3 WAN systems/lines, were not intended to be WAN services at all.  Instead, they were factored into the digital hierarchy as central-office multiplexing stages between DS2s and DS4s, where seven DS2s emptied into a single DS3, and six DS3s interleaved to form a DS4 at 274 Mbps, in support of a T4 WAN line that was to be (and for a short while actually was) carried over metro and regional distances on coaxial cable and microwave radio facilities.
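    To put numbers on that hierarchy, here is a minimal sketch using the nominal North American line rates (the multiplication never comes out even, because each stage adds its own framing and stuffing overhead):

    ```python
    # Nominal North American digital hierarchy line rates (Mbps).
    RATES = {
        "DS0": 0.064,    # one 64 kbps voice channel
        "DS1": 1.544,    # 24 x DS0 + framing (the T1 line)
        "DS2": 6.312,    # 4 x DS1 + overhead
        "DS3": 44.736,   # 7 x DS2 + overhead (the T3 line)
        "DS4": 274.176,  # 6 x DS3 + overhead (the short-lived T4 line)
    }

    print(f"7 x DS2 = {7 * RATES['DS2']:.3f} Mbps of payload "
          f"inside a {RATES['DS3']} Mbps DS3")
    print(f"6 x DS3 = {6 * RATES['DS3']:.3f} Mbps of payload "
          f"inside a {RATES['DS4']} Mbps DS4")
    ```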

    As it turned out, of course, the granularity of T3s became more economically appealing as a unit of bandwidth currency, especially once fiber was seen as imminent during the mid-Seventies.  Then we saw a flip-flop take place between the DS3 and DS4 stages, when the DS4 was repurposed as a multiplexing stage only, in CO terminal gear, for the higher line rates that followed.  Enough of my digressing, though.  You asked:

    “Suppose the core backbones are growing at 60% a year. If 10G pipes became ‘too small’ to manage current backbones in 2007, then when does 40G become too small? Math says in just 3 years, in 2010 (can you say AT&T?).”

    Your question appears straightforward enough, but under closer evaluation, in the face of what is actually taking place today, it appears you are presupposing certain facts that are not really what they seem to be.

    By this I mean that, unlike earlier jumps in bandwidth that involved increased signaling rates (RZ and NRZ) – that is, up until the time OC192 became common – the future appears paved with upgrades that, for the most part, use aggregated 10G wavelengths rather than new single-wavelength technologies of intrinsically higher signaling rates.  Of course, exceptions exist using phase- and quadrature-like analog modulation schemes, a topic I think needs more airing here in general, given the long-range implications of so many disparate formats; but suffice it to say that the latter do not lend themselves to the type of nodal interchange that time-slotted processing functions (similar to those used by INFR) employ.

    Here’s a question of my own I’d like to ask at this point: What will the mix of modulation schemes look like in three years for 100Gbps flows, given the choice between:

    1. RZ/NRZ;
    2. Analog/quadrature (by this I mean all analog modem-like approaches); or
    3. Multi-wavelength aggregation?

    Anyone care to guess?
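    As one back-of-the-envelope aid to that question, here is a hypothetical sketch of what each path implies for a 100 Gbps flow. The bits-per-symbol figures for on-off keying and QPSK are standard; the carrier counts are illustrative assumptions, not any vendor’s specifics:

    ```python
    # Hypothetical comparison of three paths to a 100 Gbps flow.
    # Required symbol rate = line rate / (bits per symbol x carriers),
    # ignoring FEC overhead. Carrier counts are assumptions.
    LINE_RATE = 100e9  # bps

    options = [
        ("RZ/NRZ on-off keying",         1, 1),   # 1 bit/symbol, 1 carrier
        ("Quadrature (QPSK-like)",       2, 1),   # 2 bits/symbol, 1 carrier
        ("Multi-wavelength aggregation", 1, 10),  # ten 10G carriers
    ]

    for name, bits_per_symbol, carriers in options:
        baud = LINE_RATE / (bits_per_symbol * carriers)
        print(f"{name}: {carriers} carrier(s) at {baud / 1e9:.0f} Gbaud")
    # -> 100 Gbaud, 50 Gbaud, and 10 Gbaud per carrier, respectively
    ```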

    Returning to my original line of thought: in the greater preponderance of cases where 40G and 100G flows are being announced, there’s been no real or appreciable increase in deployed lit capacity, since many of those flows consist of aggregated traffic that would have been destined for lower-order wavelengths anyway.  And unless I’m mistaken, this appears to hold true for Infinera as well – thus far.

    One could equate many of these multiplexing approaches to traffic shifting, or wireline grooming, rather than enabling true incremental gains in overall capacity, if you see what I’m saying here.
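    To make that grooming point concrete, a toy accounting sketch (the channel counts are hypothetical, purely for illustration):

    ```python
    # Toy accounting of the grooming argument: inverse-muxing ten lit 10G
    # wavelengths into one "100G" flow re-labels capacity rather than
    # lighting new spectrum. Counts are hypothetical.
    before = 10 * 10   # ten 10G wavelengths, in Gbps
    after = 1 * 100    # one aggregated "100G" flow, same wavelengths

    print(f"lit capacity before: {before}G, after: {after}G")
    # -> 100G either way; traffic shifted, nothing new lit
    ```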

    Ironically, single-wavelength upgrades that would otherwise employ RZ and NRZ formats can be seen potentially flourishing best in metro/access/short-distance regional nets, where the effects of PMD and CD tend to be manageable, and where upgrades can be achieved more easily without resorting to extraordinary technological fixes.

    But in the longer reaches of LH and ULH, we continue to see either exotic analog schemes or wavelength aggregation taking place, and in my book those do not add up to more capacity; again, they merely shift traffic from ten smaller pipes onto one, four, or a hundred larger ones that would have been picking up the slack anyway.

    Perhaps through these means greater capacity could be effected, if they support the redesigning of DWDM grids to allow more wavelengths – a trend I believe we’ve begun to see already – but jumps of that size in “overall” lit network capacity might just as easily be achieved by remaining at 10 Gbps at the single-wavelength level a while longer, until the larger problems associated with non-linear channel properties and anomalies are corrected.  Thoughts?
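    One rough way to see that trade-off in numbers (the ~4 THz of usable C-band and the grid spacings are the usual ITU figures; the per-wavelength rates are illustrative assumptions):

    ```python
    # Rough C-band capacity under different DWDM grid/rate combinations.
    # ~4 THz of usable C-band is assumed; per-wavelength rates are
    # illustrative, not any particular vendor's.
    C_BAND_GHZ = 4000

    for grid_ghz, rate_g in [(100, 10), (50, 10), (50, 40)]:
        channels = C_BAND_GHZ // grid_ghz
        print(f"{grid_ghz} GHz grid at {rate_g}G/wavelength: "
              f"{channels} channels -> {channels * rate_g}G lit")
    # -> 400G, 800G, and 3200G respectively: halving the grid spacing
    #    alone doubles lit capacity while staying at 10G per wavelength
    ```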

  • Frank A. Coluccio says:

    ps – I failed to give due attention to the real need for aggregating lower-order 10G lines, but isn’t that what is happening here in any case?  Only it’s taking place under the guise of ‘increasing’ capacity by moving to higher combined bandwidth ratings?

    Consider the explanation for this seemingly incongruent last statement of mine to be the use of inverse muxes instead of nodal switches – which, when you look at some of the nodal architectures of the silicon we’ve been discussing, is what you have anyway.  But that still doesn’t add up to overall gains in lit capacity if I’m simply combining channels that would be lit anyway.  Comments and corrections welcome.

  • Frank A. Coluccio says:

    Erratum:

    I don’t know if anyone caught the error I committed out of haste in Para. 8 in Comment No. 2, above, but it should have read as follows:

    “Of course, exceptions exist using phase, frequency and quadrature-like analog modulation schemes, which is a topic I think needs more airing here, in general, due to the long range implications of so many disparate formats. In any case, suffice it to say that _most_ of the latter formats do not lend themselves to the type of any-to-any nodal interchange that identically formatted wavelengths can support transparently — UNLIKE many of the dissimilar- and uniquely-proprietary- wavelength modulation formats employed by most vendors’ approaches today.”
