This Industry Viewpoint was authored by Chris Brill, Field CTO at Myriad360
The traditional data center playbook is dead. Companies are inventing entirely new categories of infrastructure solutions in real time, because the old ways can’t scale fast enough to meet demand.
The Waiting Game That Broke the System
I’m hearing the same story from clients right now: they go to a data center provider looking for space and power to run AI infrastructure. The provider says sure, we can help you. In six months. Nine months. A year. Some are quoting 18 months out.
But the data centers are waiting on power companies. And the power companies are saying it’ll be a year or two before they can get new plants online. So, you’ve got this cascade of bottlenecks. Power companies can’t deliver to data centers. Data centers can’t deliver to AI customers. And the AI customers are sitting at the very end of the pipeline, waiting. Demand is moving faster than any part of the supply chain can respond.
Going Straight to the Source
What happens when the utilities can’t flex to accommodate you? You go straight to the source.
One major cloud provider bought a nuclear power plant outright. That’s an extreme example, but it signals where this is heading. In Texas and across the Southern states, I’m working with data center providers who are building facilities directly on top of natural gas reserves. That’s a complete rethinking of how this industry has worked for decades. Instead of asking “where’s power available?” companies are asking “where’s fuel available?”
I’m seeing the same thing in Europe—wind farms being built specifically to power individual data centers, solar installations purpose-built for single facilities. Rather than waiting on the grid infrastructure to be built, they’re building their own from the ground up.
When the Builders Say No
Even when you solve the power problem, you hit the next wall. Construction companies that traditionally build data centers are at capacity. They’re telling new clients: we can’t help you. We’re booked out. We don’t have the resources.
The response has been modular construction. Shipping containers are custom-built to order with power equipment and cooling infrastructure inside. If you’ve got an alternative energy source and you’ve got ground to lay equipment on, companies will bring in container pods and connect it all together. From above, it doesn’t look like your typical four-walls data center. It looks like a bunch of shipping containers with infrastructure running between them—modular data centers assembled on site because traditional construction can’t keep pace.
The Thermal Engineering Problem
Once you get inside the facility, the engineering challenges multiply. Each successive GPU generation pushes the power envelope further: from 50–75 kilowatts per rack two generations ago, to 125 kilowatts with the current generation, to 175–200 kilowatts expected next. The electrical design feeding power into a single rack now matches what traditionally fed an entire office building. Same voltage, same amperage. Data center electrical engineers are looking at this and saying it’s beyond what they work with. Large-scale industrial electrical engineers are being brought in to design rack-level power delivery.
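To make that concrete, here’s a back-of-the-envelope sketch of the current a single rack draws at those power levels. The 415 V three-phase distribution and unity power factor are illustrative assumptions, not figures from any specific facility:

```python
import math

def rack_current_amps(power_kw: float, line_voltage: float = 415.0,
                      power_factor: float = 1.0) -> float:
    """Line current for a three-phase load: P = sqrt(3) * V_LL * I * PF."""
    return (power_kw * 1000) / (math.sqrt(3) * line_voltage * power_factor)

# Assumed 415 V three-phase feed, unity power factor (illustrative only).
for kw in (50, 125, 200):
    print(f"{kw:>3} kW rack -> {rack_current_amps(kw):4.0f} A per phase")

# ~70 A at 50 kW, ~174 A at 125 kW, ~278 A at 200 kW --
# service sizes you would normally associate with an entire building.
```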
All of that power comes out as heat. The standard air-cooling model that worked for decades can’t handle these thermal loads. Data centers are telling clients: we can provide you with some cooling, but not at the level you need. So, AI infrastructure teams are layering on their own solutions.
First: rear-door heat exchangers. Data centers provide a water supply (really a water-glycol mixture, because straight water fouls and freezes) and you run it through radiators on the back of your racks to augment the air cooling. Then teams found they could do better with a secondary loop: a primary loop from the data center, a heat exchanger in the middle, and a secondary loop feeding your equipment. Then came warm-water and hot-water cooling: pump 75°F water at a higher flow rate instead of 55°F water at a lower one, maximizing how much thermal load you extract from that facility water supply.
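The flow-versus-temperature trade falls out of the basic sensible-heat equation, Q = ṁ·c_p·ΔT. A minimal sketch, assuming a 125 kW rack load and a fixed 95°F return temperature (both illustrative numbers, not from any particular facility):

```python
# Sensible heat: Q = m_dot * c_p * dT  ->  m_dot = Q / (c_p * dT)
C_P_WATER = 4186.0  # J/(kg*K), approximate; glycol mixes run somewhat lower

def required_flow_kg_s(load_watts: float, supply_f: float, return_f: float) -> float:
    """Mass flow needed to carry a heat load at a given temperature rise."""
    delta_k = (return_f - supply_f) * 5.0 / 9.0  # Fahrenheit span -> kelvin
    return load_watts / (C_P_WATER * delta_k)

LOAD = 125_000  # watts; illustrative rack load
# Chilled 55F supply vs. warm 75F supply, both returning at an assumed 95F:
print(required_flow_kg_s(LOAD, 55, 95))  # ~1.3 kg/s
print(required_flow_kg_s(LOAD, 75, 95))  # ~2.7 kg/s: half the dT, double the flow
```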
Now they’re exploring two-phase cooling. The physics of boiling—the state change from liquid to vapor—pulls heat more efficiently than pumping liquid past a hot surface. It’s called latent heat: the act of vaporization draws heat in. Systems are being designed to induce boiling within the cooling loop to maximize heat extraction.
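To get a rough sense of why the state change matters, compare latent and sensible heat using water’s textbook constants. Real two-phase systems typically use engineered dielectric fluids with lower latent heats, so treat this purely as an order-of-magnitude illustration:

```python
# Water, approximate textbook values:
C_P = 4.186      # kJ/(kg*K), sensible heat capacity
H_VAP = 2257.0   # kJ/kg, latent heat of vaporization at 100 C

# Heat absorbed by 1 kg warming through 20 K (a typical single-phase loop rise):
sensible = C_P * 20          # ~84 kJ
# Heat absorbed by 1 kg boiling off, with no temperature change at all:
latent = H_VAP               # 2257 kJ

print(latent / sensible)     # ~27x more heat per kilogram of coolant
```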
Data centers are adapting too. They’re adding water infrastructure as a fourth utility. You used to pay for space, power, and cooling. Now you’re paying for space, power, cooling, and a metered water supply—either raw municipal water or chilled water at a specific temperature and flow rate.
The Question That’s Coming
All of this innovation is real. But I keep coming back to the same concern: a huge percentage of the AI infrastructure being built right now—I’d estimate two-thirds—is “build first, business second.” Building capacity and assuming demand will grow, without answering a specific question or delivering a specific outcome.
Cost efficiency and environmental impact are taking a backseat to the race for outcomes. At some point, someone’s going to look at the checkbook and ask whether this is sustainable. We haven’t hit that point yet.
But the leading players are already starting to scrutinize the demand side. The major AI platforms have all implemented mixture-of-experts architectures. When you send a prompt, the system doesn’t broadcast it to all available GPUs. It first figures out what area of expertise your question falls into, then routes it to a specialized portion of the model. That’s the beginning of matching resources to the actual work.
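A minimal sketch of that gating idea, with a toy random gate standing in for the learned routers inside production models; the expert count, dimensions, and scoring here are made up for illustration:

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

def route(prompt_embedding: np.ndarray, gate_weights: np.ndarray, top_k: int = 2):
    """Score every expert, then activate only the top-k instead of all of them."""
    scores = softmax(gate_weights @ prompt_embedding)      # one score per expert
    chosen = np.argsort(scores)[-top_k:]                   # best-matching experts
    return chosen, scores[chosen] / scores[chosen].sum()   # renormalized weights

rng = np.random.default_rng(0)
num_experts, dim = 8, 16                    # toy sizes; real models are far larger
gate = rng.normal(size=(num_experts, dim))  # stand-in for a learned gating matrix
experts, weights = route(rng.normal(size=dim), gate)
print(experts, weights)  # only these experts' GPUs do work for this prompt
```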
The broader industry hasn’t caught up. Most enterprise AI deployments are still building one massive model to handle everything for their organization. That doesn’t scale. Within the next year or two, we’ll see organizations start building purpose-specific infrastructure. A rack dedicated to travel planning. A rack for mathematics. Specialization at the infrastructure level, mirroring what’s already happening in software.
The supply-side workarounds I’ve been describing—the shipping containers, the natural gas builds, the thermal engineering layers—they buy time and keep the industry moving while demand outpaces infrastructure. But the demand-side discipline is what determines whether the math ever actually balances. To get it right, we need both.
Chris Brill is Field CTO at Myriad360, where he helps enterprise IT teams build resilient, high-performance infrastructure strategies. With deep experience in cloud, networking, and data center architecture, he brings clarity to complex technology decisions. Follow Chris on LinkedIn.