Industry Spotlight: DartPoints CEO Scott Willis Aims at Enterprise AI

June 9th, 2025

With the rise of AI infrastructure over the last couple of years, all parts of the data center industry have shifted in that direction.  That means lots of very high-density colocation and a whole lot of power to keep it running.  However, the vast majority of the focus has been at the hyperscale level, with the enterprise opportunity getting short shrift.  But what we once called the edge is still out there, and AI is coming there too.  One company looking to fill the gap is DartPoints, whose current footprint spans 11 markets in the Midwest, the Carolinas, and Louisiana.  With us today for a return visit is DartPoints CEO Scott Willis, whom we last spoke to in December of 2020.

TR: It has been about four and a half years since we last talked.  How has DartPoints’ strategy evolved?

SW: Our strategy has evolved, but the core thesis hasn’t.  We’re still very focused on emerging tier-two, tier-three, and smaller markets.  But we are in the process of transitioning the business on the tailwinds of AI, HPC, and just denser workloads.  We see a vacuum in the market: a lot of the traditional players that serve this space have been moving up to the hypers for the last 12 to 18 months, if not longer.  That has left a real gap for 10 to 40 megawatts of space, particularly around the enterprise.  We are not abandoning the markets we have focused on, and we are by no means abandoning cloud.  That is a very profitable, very successful business for us and we want to lean into it.  But today, we are slightly over half cloud versus colo, and if we’re successful over the next three years we are going to look much more like a space-and-power company because of the size of the revenue associated with these kinds of deals.  We are a platform that’s well-positioned for that, and we think the markets we’re targeting are going to be strong beneficiaries of those kinds of workloads.

TR: Why are enterprise workloads moving in that direction?

SW: AI inference is becoming more prevalent, and what rolls down from inference is workloads that enterprises are actually going to deploy into their businesses.  Whether it’s efficiency, productivity, customer growth or success, retention, call center, customer care, or pick your use case, that’s going to be done on a more localized level.  That’s where we’re going to deploy capital.

TR: How will you approach those investments?

SW: The three legs of value creation that I’m very focused on do not change. 1) I want to grow the business organically at or above market expectations.  It is important to me that we do that for our investors. 2) I want to deploy success-based capital into the business. If we have to add capacity, compute, storage, etc., we want to get an acceptable rate of return. And 3) we will continue to leverage M&A for growth, for geographical expansion, and for density within our existing footprint. But I do believe that in the 10 to 40MW range, M&A is going to look a little bit different for us in the next one to three years. We’re going to be more interested in sites that add capacity, either by expanding geographically or by densifying within our existing footprint.

TR: What kind of M&A opportunities would attract your attention the most?

SW: If I find a data center that has a few customers in it, maybe 5 megawatts, but has the ability to add another 15 or 20 megawatts off the substation, that’s a target for me. I want to buy it and deploy capital into it. I want to enable that 25 megawatts to capture that larger enterprise segment we’re going after. Listen, if a “business” versus a “site” comes available that strategically fits with us, obviously we’re going to participate in that too.  But my priority around M&A is more around strategic site acquisition that can give me a time-to-market advantage, versus a standard three-year greenfield project.  Time to market is the currency for me. I can go in and repurpose an existing facility in 10 to 15 months. If I can turn that into a state-of-the-art liquid-cooled, direct-to-chip environment that supports 20, 30, 40, 50, or even 70kW per rack, that’s where I’m going.  My customer base is not Google or AWS. Those guys will plan two, three, four, and in some cases five years out. My enterprise customers won’t do that. What I’m finding is that the shelf life of an opportunity in my pipeline is around 6 months. I might get 9, and if I get lucky, I get 12. Otherwise, that opportunity is going to go find another home.

TR: What type of enterprises are you starting to see the most traction with?

SW: It’s a hybrid of AI workloads, largely inference. A handful of enterprises are deploying GPUs in our data centers. I am not suggesting we are anywhere near seeing that at scale, but I do think it will grow. It’s incubating. But we believe that CPUs, particularly in the upper end of the enterprise, are still going to have a long life.  CPUs are going to have quite a runway before the entire market, particularly the enterprise, converts over to an all-GPU IT environment. The next segment is AI companies that are trying to create enterprise workloads they want to sell. They’re not all going to make it, which makes this a riskier customer segment than, say, Google, so we know we’ve got to be cautious there. Third is the larger enterprises that are either headquartered or have large locations in and around our facilities, with denser workloads that have to be a little more location-specific.

There is a segment of that enterprise market where, two years ago, you couldn’t sell somebody unless you were geographically close to them. But nowadays, because capacity is so constrained, we’re seeing customers on the West Coast or in the deep Midwest that will look to put a workload in Greenville, Cincinnati, Columbus, or Indiana because of capacity. They need the power and space more than they need their engineer inside of 50 miles. We’re going to ride that, and we’ll have to see how sustainable it is, because customers still very much like to make decisions in and around where their employees are.  But we do believe that as AI workloads become more location-sensitive and specific, we will look for opportunities to do business with the hypers. We have them today, such as AWS in Baton Rouge, Louisiana and Columbus, Indiana for specific workloads. But we’re not at scale yet.

TR: Are there any geographical areas that DartPoints would like to expand into? 

SW: We certainly would like to move a little bit further west, and I’d be happy to densify further northeast and further southeast. We’ve done our modeling, and we’ve got roughly 108 to 112 markets where we’ve done basic geographical research. We look for emerging markets, economic robustness, fiber connectivity, power, water, and gas, and that is our sandbox.  We are not looking for data centers in Dallas or Chicago or Northern Virginia. I won’t say never, but I also don’t know that I’m going to go look for a data center in Phoenix, Arizona, because I have no leverage or synergy there.  I’d have to hire a completely new operations team and a new go-to-market team. I’d have to reestablish relationships with my channel partners.  All of that means delay. I would prefer to densify where I can leverage my existing resources, or to extend a little bit further west as a contiguous extension of where I am.

TR: How will you meet rising demand while maintaining that swift ‘time to market’?  What does your ecosystem look like?

SW: We are trying to streamline our partner ecosystem.  We have MSAs and contracts that we’re putting in place. We’re trying to use partners that have the scale and capacity to support us on a multi-site basis, and for whom we are a big enough opportunity that they want to invest resources to support us.

TR: Do you foresee any supply chain issues coming up, given the current broader political environment?

SW: The short answer is yes. There’s no question there’s risk, and we are trying to hedge against it a little bit.  But you’ve got much bigger CEOs running much bigger organizations that have much greater risk than we do. I try to pay attention to how they think about it and what they’re looking at, because I think that will roll down to someone like me. Right now, with OEM vendors, we’re not seeing a lot of initial pass-along of those costs to us. That could change tomorrow, though.  It’s a very competitive space. They want to win. They want to grow. And a number of the larger ones are seeing the value in the mid-market. We are looking at best-in-class solutions that we want to deploy, and then, unfortunately, I’m having to add work not only for our vendors but for my internal team to hedge against that: I want to know about US-based manufacturers. They may not be the vendors you would otherwise go with, but I want to know what that solution looks like, so that I can try to de-risk the tariff situation.  We’re just trying to think through all the options for hedging against the risks we see at the greater macro level.

TR: Thank you for talking with Telecom Ramblings!

Categories: Cloud Computing · Datacenter · Industry Spotlight
