It may seem like the data center space is nothing but cavernous AI facilities these days, but the enterprise data center opportunity is still alive and well out there in the marketplace. ValorC3, which recently rebranded from Tonaquint, is one of the newer entrants assembling a portfolio in the space. With us today is ValorC3 CEO Jim Buie. Jim joined the company in October after holding leadership positions at Involta (now ark data centers), ViaWest (now Flexential), and Comcast.
TR: Where does ValorC3 fit into the data center ecosystem?
JB: Our thesis at ValorC3 is to continue to build data centers in emerging markets, and we like to think of ValorC3 as Valor anywhere. Wherever the demand goes, we’re looking to fill that for enterprise needs ranging from 5MW to 40MW.
ValorC3 is really the merger of a couple of companies, Tonaquint in St. George, Utah, and EdgeX in Oklahoma City, Oklahoma, and our investor is CVC DIF, which has over $20 billion in assets under management. They brought me on board to expand the business. With all the data center demand in the marketplace, I am excited for the next decade. I think we have a lot of growth ahead of us.
TR: What does the company’s infrastructure look like today?
JB: Our Oklahoma City facility has 5MW of capacity. The St. George facility has been around for a long time, and we have a few megawatts there. In addition, we just opened 750kW of new footprint in Boise, Idaho. As we move forward, we’ll open data centers with about 5MW of initial inventory with expandability to 40MW of capability.
We are targeting AI workloads for enterprise as well as a lot of the traditional needs, such as on-premise ERP systems. Most people don’t realize SAP on-premise is actually one of the fastest-growing on-premise applications these days. And that transcends a lot of industries, including manufacturing and financial services. Everybody has spent so much time on hyperscale that people forget about the enterprise needs in the marketplace. So that’s where our focus will be.
TR: AI-related data center development has been dominating the conversation. How can the enterprise side of things compete for attention?
JB: I think that is the challenge. From conversations with banks, their loan portfolios are becoming very concentrated. There are singular hyperscale projects that require $10B of capital. Yet, we’re finding that enterprises are having difficulty finding a location for their high-performance computing applications. That’s the problem we’re here to solve: find a nice home for all that server equipment without getting boxed out by all of the larger demand that’s in the marketplace, all while meeting challenges such as adequate utility supply. We want to ensure that we provide a good home for servers for enterprise clients to run mission-critical workloads.
TR: What type of demand are you seeing from enterprises in today’s market?
JB: Beyond their traditional workloads, enterprises are also using high-performance compute servers that require liquid cooling to solve business problems. For example, a manufacturer might be trying to continuously improve its manufacturing process via simulation or even its own AI inference deployment.
People like their data and want to know where it’s at. They want it to run in a data center where they can physically go to touch it. We are there to help provide locations and inventory to do that. With all the growing demand, enterprises in this market sometimes have a hard time finding a viable location to put those servers.
TR: What other geographies is ValorC3 interested in entering?
JB: In Q2, I think we’ll be able to announce a couple of new markets. We’re mostly in the West today, but the Midwest is a prime area for us, and we will look to continue expanding down into the Southwest. We are going where our clients are asking us to go. A lot of the high-performance compute needs have less geographical preference, which means they can choose locations with favorable utility rates, or perhaps tax benefits or economic incentives from a community. We’ve studied over 800 DMAs (Designated Market Areas) that we like, but that list will really be prioritized by client demand. Our expansion strategy will follow our clients and where they want to be. So where we have one or two clients with a geographical preference, we’ll go into those markets. But it won’t be markets like Northern Virginia in any case. We’ll be in some larger markets, but we will continue to focus on emerging and secondary markets.
TR: When you enter a new market, will you build new infrastructure or buy an existing facility?
JB: I think it’ll probably be 50/50 where we’re either acquiring or building from the ground up. In Q2, one of our expansions will be a build from the ground up, and one will be a purchase. And much like EdgeX in Oklahoma City, it will be an existing facility that we can modernize, add liquid cooling to, and provide services in that market. I think there’s a need for both organic and inorganic growth. The M&A strategy is nice because you’re in the market faster and able to serve client demand sooner. Also, you have certainty on the utility power that’s available. The nice thing about building from the ground up is that you can really customize it to the clients’ needs. You can get a bigger tract of land, so if you have one client take up a 5MW data hall, you can expand another 35MW as more client demand in that market occurs. I love building from the ground up because that’s where the value for enterprise clients is. They don’t have the time or energy for site selection.
TR: How do you make the case for enterprises to close those corporate data centers they might have?
JB: There’s not a CFO in the world today who wants to own corporate data centers, much less build one that is really modern in its capability and future-proofed. Friends don’t let friends build data centers. If you’re an enterprise, don’t build a corporate data center in this day and age. Let the professionals do it. We’ll work alongside you to pick the right location, get the best economic outcomes for you, and future-proof it. And just by the definition of multi-tenant outsourcing, you’ll get the advantages of scale versus trying to go it alone.
TR: You mentioned liquid cooling. Have you built out that infrastructure in each of your existing facilities already? How have you gone about it?
JB: Right now, it’s available in Oklahoma City. Going forward, it will be every market that we open up, but our first experience was in Oklahoma City, and it has been interesting. There are a lot of vendor solutions out there. We prefer to do air cooling in rows using in-row coolers or containment up to about 30kW per cabinet, which is about 10 times the density we used to see for cabinets. Then, once we get over that, we can do rear-door exchangers for cooling, or we can do cooling to the chip. We are currently not locked into a vendor. We really want the clients to help us select vendors. There are three or four names that everyone looks at for liquid cooling that lead the marketplace. It’s an emerging market, it is very much still evolving, and we’ll continue to be a fast follower. That’s something we’ve always done in the data center space. There’s always a hot new storage platform or hot new virtualization capability, and we like others to fine-tune it and drive the cost down. Then, we’ll continue to adopt the best-in-class technology as it evolves. The clients dictate exactly what they need at this point, and that will define the vendors in the market.
TR: How closely is the enterprise space following the hyperscalers when it comes to high-density compute and AI? Is the demand rising quickly?
JB: The demand is there. It depends on which analyst you believe, but you could say 70% of enterprises are on a journey with high-performance compute and/or AI. And they’re all thinking about how to use AI or high-performance compute to compete. We like to talk about relentless enterprises, which are leaders in their industry. Whether it’s a healthcare organization that’s really focused on individualized patient care and needs high-performance compute to assimilate all the information to provide better individualized care, or a manufacturer simulating rather than testing their product to save money, these things require the new chipsets and high-performance compute. Such workloads are running in the neighborhood of 50kW per cabinet, which is not quite as much as AI workloads of 100kW per cabinet or higher. They’re just solving business problems, and they want more compute capability to run new applications. There’s a little bit of an arms race in the enterprise space of people just trying to solve problems with technology and using high-performance compute to do it. It’s fun to watch.
TR: Where do you see AI applications themselves appearing in the enterprise market?
JB: For AI platforms, the data sovereignty and security preferences for enterprises are very interesting. DeepSeek was a big Wall Street-impacting announcement, and the feedback I got from enterprises from that release was that they would never put their secret sauce or most critical data into a cloud platform, much less one that’s backed by China. They want to own their own server equipment in a data center where they know where it’s located and want that data protected. Enterprises want a private deployment of AI where they can use all the available toolsets. But they want control of that data so that it’s not breached by a competitor or in any other way. That’s where we can help.
For enterprises, it’s still the early, early innings. When you look at any of the studies of enterprise deployments, the majority continue to be more traditional ERP and EHR (Electronic Health Record) systems. I’d say it’s probably a 10-to-1 ratio right now. I think that’s going to level out over the next five years as enterprises continue to deploy AI or high-performance compute for these various challenges that they’re facing.
TR: Thank you for talking with Telecom Ramblings!