A week ago today, the joint hurricane/nor’easter called Sandy slammed into the NY/NJ/CT/MD/DE/PA region, leaving millions without power and vast tracts of low-lying areas near the ocean in varying states of devastation. Much of the less severe damage to the telecommunications infrastructure has now been repaired, and perhaps now it’s time to take note of what Sandy has taught us. Here are a few such items from my point of view; I welcome any additions:
Power really is everything. The vast majority of telecommunications outages in Sandy’s wake came from loss of power in some form, whether at cell towers, data centers, or consumer homes. Ring topologies, redundant gear, protected circuits – they all did their jobs for the most part; it was the widespread power problems that made it look bad. We’ve worked so hard to build out fiber and towers nationwide that sometimes we forget that the utility power grid it all depends on is so vulnerable to disruption – whether it’s the big pipes feeding data centers or the little ones feeding towers or individual homes. Power has been the center of the data center universe for some time, which is why most had adequate diesel backups and prearranged fuel deliveries this past week. But the rest of the infrastructure was not so well prepared.
We need some battery magic. Think about how far batteries have come in the consumer segment over the last few years, powering our laptops and tablets and smartphones for ever more hours for a hundred bucks and a few ounces. And you’re telling me we still can’t cost-effectively store more than 24 hours of backup power at a cell tower? Ok, so vendors have been working very hard on scaling LTE and backhaul and all that, but now maybe it’s time to put some more engineers to work hardening all that infrastructure against simple power failures. Operators need to improve their designs, and vendors need to tap the latest in battery technology to give them more options. We need to scale more than just bandwidth.
Raise those pumps and generators. Ok, so without naming names, some of the data centers that went all the way down last week apparently did so because they got water in their basements. Keeping your backup power in the basement in a low-lying area next to the coast is an obvious design flaw, whether it’s NYC, Florida, Japan, New Orleans, or anywhere else. Water is the lifeblood of civilization, but we must not forget its potential as a destructive force. Hopefully Sandy’s storm surge has sufficiently reminded folks that we don’t build backup systems just to satisfy spreadsheets and hedge against hiccups in the power grid, but to defend against real physical threats.
The storm gets a few hours of blame; service providers will inevitably get the rest. Six, maybe twelve hours after a disaster is your grace period with customers, whether consumer or enterprise. After that, the public relations disaster grows by the disconnected hour. It doesn’t matter that you have no control over utility power restoration. It doesn’t matter that consumers’ own choices have left them so dependent on omnipresent wireless connectivity. It doesn’t matter that they weren’t prepared, only that you weren’t.
None of this is an indictment; I happen to believe that overall the industry did quite well this time, all things considered. But whether you chalk it up to global warming or not, the more dependent society gets on rich connectivity, the less tolerant customers will be of any lapse in their providers’ resilience in the face of Mother Nature. Consider it an opportunity rather than a burden, because if you’re sufficiently prepared relative to the competition, you will win new customers because of it. It’s a good thing for telecom infrastructure that what we provide is so important to people; all we have to do is figure out how to keep them as connected as they want to be.
If you have an item to add to this list that fits, I will happily add it verbatim, with or without attribution (your choice). Either leave a comment below or send it to me via email.
You have to take into account building, fire, and safety codes. Much of that equipment was in the basement because it could not be placed anywhere else, whether due to cost or, more often, codes n’ rules.
It’s far too naive to simply say “well, that’s stupid!”
A good point; then perhaps it’s the building codes that need to be fixed in such cases.
What happens when the fuel trucks can’t access buildings and transportation is hindered? It all has a domino effect. That was another issue for data center operators who were running generators. Without access to electrical pumps and other electrically powered equipment, fuel couldn’t be delivered where it needed to go. They simply ran out of fuel.
Regarding the desire for 24 hours of backup power at all cell sites: that is a lot of additional capex that is only needed during an extraordinary event such as Sandy.
If the cost is greater than the lost profit on calls/data for the duration of the event, then it is not economically viable to deploy.
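To make that break-even logic concrete, here is a minimal back-of-envelope sketch in Python; every figure in it (per-site battery capex, hours covered, per-hour margin) is a hypothetical placeholder for illustration, not actual carrier data.

```python
# Back-of-envelope version of the break-even test described in the comment above.
# Every number here is a hypothetical placeholder, not real carrier economics.

extra_battery_capex_per_site = 15_000.0  # assumed one-time cost to add ~24h of batteries
outage_hours_covered = 24                # extra hours of service the batteries would buy
lost_profit_per_site_hour = 40.0         # assumed margin on calls/data per site, per hour

# Profit that would otherwise be lost for the duration of the event
lost_profit_avoided = outage_hours_covered * lost_profit_per_site_hour

if extra_battery_capex_per_site > lost_profit_avoided:
    print(f"Not viable on lost-profit grounds alone: "
          f"capex ${extra_battery_capex_per_site:,.0f} > avoided loss ${lost_profit_avoided:,.0f}")
else:
    print(f"Pays for itself: "
          f"capex ${extra_battery_capex_per_site:,.0f} <= avoided loss ${lost_profit_avoided:,.0f}")
```

Of course, as other commenters point out, a pure lost-profit comparison leaves out churn, reputation, and what’s simply the right thing to do.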
After almost 30 years in the business, it never fails to amaze me how bureaucrats think they know how things should be done, and then are the first to scream and holler when it doesn’t fit their perfect world… (kinda like democrats, LOL)
I’d expect Sprint’s Network Vision sites to have higher up-times due to newer gear that should have lower power requirements.
Things aren’t always about what’s economical, but what’s right to do. I try to build all of my sites with 2 or 3 days of battery.
Can’t get fuel in via truck? Boat it in! 😉