Last time we looked at pure 100G wavelengths; now let's look at bandwidth aggregation. To reach 100G, why not take ten 10G connections, bundle them together, and wrap them in a nice enough package that nobody can tell the difference? If it takes 100G as input, gives 100G as output without losing anything, and does it cheaply, isn't that good enough? Well, sure, as long as you add in the cost of obtaining and lighting many more fibers down the line. This approach decouples the speed of the optics from the speed of the service, and it depends on miniaturization and integration of existing technology rather than the development of new methods.
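The bundling idea is essentially inverse multiplexing: stripe one fast stream across several slower lanes and reassemble it at the far end. Here's a minimal sketch in Python; the lane count matches the ten 10G example, but the chunk size and striping scheme are illustrative assumptions, not how any particular transport gear does it.

```python
# Illustrative inverse multiplexing: deal a stream round-robin
# across ten slower "lanes", then reassemble it on the far side.
LANES = 10
CHUNK = 64  # bytes per chunk; real systems stripe at other granularities

def stripe(data: bytes, lanes: int = LANES, chunk: int = CHUNK) -> list[list[bytes]]:
    """Split data into chunks and deal them round-robin across lanes."""
    buffers = [[] for _ in range(lanes)]
    for i in range(0, len(data), chunk):
        buffers[(i // chunk) % lanes].append(data[i:i + chunk])
    return buffers

def reassemble(buffers: list[list[bytes]]) -> bytes:
    """Interleave the lane buffers back into the original stream."""
    out = []
    for round_idx in range(max(len(b) for b in buffers)):
        for lane in buffers:
            if round_idx < len(lane):
                out.append(lane[round_idx])
    return b"".join(out)

payload = bytes(range(256)) * 40  # 10,240 bytes of test traffic
assert reassemble(stripe(payload)) == payload
```

As long as the lanes stay in sync, the user sees 100G in and 100G out; the cost of keeping ten physical lanes lit is where the rest of this post comes in.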
- less dependence on scientific breakthroughs
- cost savings from miniaturization and integration on chips are better understood, and may even follow Moore's law
- fiber requirements go up rapidly
- at some point the mismatch between the speed of the optics and the speed of the service causes efficiency losses
- more power to light more fibers means your costs increasingly depend on the price of that power, and power costs are hard to predict.
- fiber-rich carriers with extra conduits, such as Qwest and Level 3, win big
- fiber-poor carriers like Cogent or Sprint have to find more fiber on a regular basis
- If anyone figures out how to put substantially more useful wavelengths on a fiber in a feasible manner, say via SOAs (semiconductor optical amplifiers), then everything gets a lot easier.
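The fiber arithmetic behind these points is easy to sketch. The numbers below are illustrative assumptions (a 4 Tbps route on a 40-channel DWDM system), not real carrier data, but they show why 10G optics burn fiber so much faster than 100G optics, and why more wavelengths per fiber helps either way:

```python
# Back-of-the-envelope fiber math for the trade-off above.
# All figures are assumptions for illustration, not real network data.
import math

def fibers_needed(demand_gbps: float, wave_gbps: float, waves_per_fiber: int) -> int:
    """Fiber pairs required to carry a given demand on one route."""
    waves = math.ceil(demand_gbps / wave_gbps)
    return math.ceil(waves / waves_per_fiber)

demand = 4000  # Gbps of service on a route (assumed)

# Bundled 10G waves vs. native 100G waves, both on a 40-channel system:
print(fibers_needed(demand, 10, 40))   # 10G optics -> 10 fiber pairs
print(fibers_needed(demand, 100, 40))  # 100G optics -> 1 fiber pair

# Denser wavelength packing eases the pressure even at 10G:
print(fibers_needed(demand, 10, 160))  # 160 channels -> 3 fiber pairs
```

Ten fiber pairs versus one is the whole argument in miniature: the aggregation approach works, but the fiber-rich get richer.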