There was an interesting article over on ArsTechnica late yesterday. Apparently the fiber network supplying bandwidth to the Large Hadron Collider (LHC) and its processing sites is really opening some eyes. That’s even better news than the fact that the Earth didn’t collapse into its own personal black hole when they turned the thing on! The article supplies some details on the bandwidth involved:
“Because the networking is going so well, filling the pipes can outrun tapes,” von Rueden told Ars. Right now, that network is operating at 10 times its planned capacity, with 11 dedicated connections operating at 10Gbps, and another two held in reserve.
Hmm, 130Gbps just feeding into a research site, operating at over 80% of capacity, who woulda thunk it? Then there’s this bit:
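For what it's worth, the arithmetic behind that 130Gbps figure is easy to check. A quick back-of-envelope sketch in Python, using only the link counts from the quote above (the utilization percentage is my own division, not a measured value from the article):

```python
# Back-of-envelope check of the LHC network numbers quoted above.
# Link counts and per-link rate come from the Ars article; the
# utilization figure is derived, not measured.

ACTIVE_LINKS = 11    # dedicated 10Gbps connections in use
RESERVE_LINKS = 2    # connections held in reserve
GBPS_PER_LINK = 10

total_capacity = (ACTIVE_LINKS + RESERVE_LINKS) * GBPS_PER_LINK  # 130 Gbps provisioned
active_capacity = ACTIVE_LINKS * GBPS_PER_LINK                   # 110 Gbps in use
utilization = active_capacity / total_capacity                   # share of links lit

print(f"Total provisioned:  {total_capacity} Gbps")
print(f"Active links:       {active_capacity} Gbps")
print(f"Share of capacity in active use: {utilization:.0%}")
```

Eleven lit links out of thirteen provisioned works out to roughly 85%, which is where the "over 80% of capacity" comes from.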
The original plan had been that each of the Tier 2 sites would keep a specific subset of the LHC data, and analysis jobs (which, being code, should be relatively compact) would be sent across the network to wherever the data resides. Instead, it’s turned out that the network performs so well that the data can be streamed anywhere on the grid in real time, which has made things significantly more flexible.
That’s a heck of a lot of data to stream in real time, though I’m sure there aren’t that many people on the other end of that stream. I must admit, I’ve always thought of academic research bandwidth as, well, mostly theoretical. But if they’re using this much in practice and have been this impressed by their network’s performance, I wonder whether they’ll soon find something useful to do with 100Gbps connections.