Nyquist Capital has a fantastic article on the prospects of 40G technology and the underlying trends; I recommend a read. A dynamic I’d tack on is one brought up by a reader of this blog in a private message, and it has been rattling around in my brain for a week or two. Back in the days of the bubble, a top-of-the-line DWDM setup carried 32x10G waves, and the unit of commerce was either FE (100Mbps) or STM-1 (155Mbps). Each major installation of gear supported several thousand such morsels of bandwidth. When a carrier installed a system, it was expected to be ‘it’ for many years to come. It was like buying a car – you don’t do it that often, and it feels like a one-time expense when you do.
Fast forward to the present: leading systems top out at perhaps 800G of total capacity, with Infinera putting out vast quantities of PICs at 100G. But the basic unit of commerce is quickly becoming 10G. The chunk of bandwidth carriers install is no longer so large relative to the unit they sell; the ratio has fallen from thousands to tens, a shift of more than an order of magnitude. Each customer takes up a larger fraction of any particular system, and almost as soon as a carrier sells bandwidth, they have to install more top-of-the-line gear. Now it’s like buying a PC – it seems like you need a new one every year or you fall behind.
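For the arithmetic-minded, here’s a quick back-of-the-envelope sketch in Python of that ratio, using the figures quoted above (the numbers are illustrative, not exact):

    # Back-of-the-envelope: sellable units per fully loaded system.
    # Capacities in Gbps; figures are the ones quoted above.

    # Bubble era: 32 waves x 10G, sold as FE (0.1G) or STM-1 (0.155G)
    bubble_capacity = 32 * 10                # 320 Gbps total
    fe_units = bubble_capacity / 0.1         # 3200 FE circuits
    stm1_units = bubble_capacity / 0.155     # ~2065 STM-1 circuits

    # Present day: ~800G systems, sold in 10G chunks
    present_capacity = 800
    teng_units = present_capacity / 10       # 80 x 10G circuits

    print(f"Bubble era:  {fe_units:.0f} FE or {stm1_units:.0f} STM-1 units per system")
    print(f"Present day: {teng_units:.0f} 10G units per system")
    # Thousands of sellable units per install then, tens now --
    # installed inventory has shrunk by well over an order of
    # magnitude relative to the unit of sale.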
What does this mean? It brings carriers’ costs and revenues for traffic closer to the same timeframe, and it lowers their installed inventory substantially relative to sales volume. Thus there is far less opportunity for a massive price war in bandwidth, because there is far less potential for a glut of installed capacity. So not only are there no deflationary pressures to drive 40G costs lower, there is almost no chance of such pressures developing in the foreseeable future. Carriers like this; after all, the deflationary spiral that drove 10G adoption was a horrible experience for them.
Perhaps we can say that moving to 40G waves and 1.6Tbps systems just doesn’t change the equation much – it barely keeps up. Carriers buying 40G gear today justify it with a shorter payback schedule: if they buy some 40G gear, they can still expect that capacity to be mostly in service by the time 100G is ready for prime time in a couple of years. Waiting for 100G just to avoid upgrading twice is less of an issue, because you expect to have to buy more gear around that time anyway.
I think there’s another implication as well, one that Infinera is trying to capitalize on. Even if one moves to an 8Tbps system (80x100G), carriers do not want to buy bandwidth in 8Tbps chunks. They want to buy in chunks that are large enough to keep their networks manageable, but small enough not to introduce excess inventory. Hence the size of the PIC: 100G rather than 400G or 800G when it came out. 100G was chosen not because it matched some planned Ethernet or SONET protocol speed, but because it was the right-sized chunk for their customers’ business models. This decoupling implies that packaging is increasing in importance, while line speed is decreasing.
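To make that chunk-size tradeoff concrete, here’s a hypothetical sketch (my framing, not Infinera’s math): assume a carrier sells 10G circuits and adds capacity one PIC-sized chunk at a time, with an 8Tbps system as the ceiling.

    # Hypothetical chunk-size tradeoff: sales needed to fill one
    # chunk vs. install steps needed to fill one system.
    SYSTEM_GBPS = 8000   # 8 Tbps system (80 x 100G)
    UNIT_GBPS = 10       # unit of commerce

    for chunk in (100, 400, 800):            # candidate PIC sizes in Gbps
        sales_to_fill = chunk // UNIT_GBPS   # 10G sales before the chunk is full
        steps_per_system = SYSTEM_GBPS // chunk
        print(f"{chunk}G PIC: {sales_to_fill} sales fill a chunk, "
              f"{steps_per_system} install steps per system")
    # Smaller chunks mean less idle inventory per install but more
    # install events to manage; 100G sits at the balance point the
    # argument above says Infinera judged right for its customers.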
Ok, rambling over – but since that’s the name of the blog, you were warned!