Industry Spotlight: Iceotope’s Nathan Blom on Precision Liquid Cooling

December 19th, 2023

When we talk about the energy side of the data center business, it is often from the facility level down to hot and cold aisles and such. But with the demands of next-generation chips aimed at AI and ever-higher density computing, the need for a revolution in cooling at the component level is growing. One company looking to take on that challenge is Iceotope, which specializes in Precision Liquid Cooling. With us today to talk about just what that is and where it fits in the ecosystem of today and the future is Nathan Blom, Chief Commercial Officer at Iceotope.

TR: What is your background, and what drew you to Iceotope?

NB: I’ve been in the IT hardware business for some time now, spending the better part of a decade at HP and then most recently at Lenovo. At Lenovo, I was exposed to alternative cooling technologies about 5-6 years ago, but it didn’t seem imminent and it kind of fell into the background. In the last couple of years, though, I began looking at where things were headed from a technology perspective due to the heat being generated by the next generation of Intel, AMD, and NVIDIA platforms. It became evident that we’re going to have to make a change in how we build our ecosystem from a cooling perspective. When I was exposed to Iceotope’s technology, I thought it was by far the most elegant and advanced version of cooling outside of air. So when the opportunity to join Iceotope opened up, I jumped at it.

TR: Tell me about Iceotope’s approach.  How does it differ from other cooling technologies?

NB: Iceotope is about Precision Liquid Cooling. It is not immersion cooling, in which you surround the entire ecosystem with liquid in the hope that it will exchange heat more efficiently than air will. It does that, but it is imprecise. It relies on the hope that if you provide enough liquid, you’ll come in contact with every component within the ecosystem and transfer that heat. What Iceotope does is deliver exactly the right amount of fluid to each component within the chassis: the CPU, the GPU, the memory, the SSDs, the power supplies, etc. We use a very small amount of liquid and the chassis is not immersed. We are able to transfer 100% of the heat because we use the liquid to cascade across each component, and we don’t have to reinforce floors to handle the weight of lots of liquid. We use a common form factor of a vertical chassis, as we do with air, which means it’s fully serviceable and scalable. Of course, the ultimate result is that it’s sustainable because it uses 40% less energy and 90% less water than a classic air-cooled data center environment.

TR: That sounds like it would require a great deal of customization in coordination with the companies making those servers?

NB: At our core, Iceotope is an engineering and IT company. A majority of the people who work at Iceotope are engineers, and we spend our days working hand in hand with Intel, HPE, Lenovo, the other OEMs, the ODMs, etc. to make sure that we provide the absolute best solution for every form factor. We are able to apply our technology across the entire ecosystem. It turns out that we require minimal customization, and, more importantly, our technology enables the design of a new generation of IT that is not based on airflow and air channels for cooling.

TR: Does the data center itself have to have the right water infrastructure available as well?

NB: One advantage of our technology is that we can still use air exchange if needed.  We use a liquid-to-air heat exchanger if there is not a facility water loop in place. If a data center is not quite ready to transition into a water loop within their facility, we can absolutely still operate in a hybridized environment.  If they are running air-cooled servers, they can run our technology in the same rack or next to that rack with no impairment to the function of either environment. But many data centers are already starting to future-proof and recognize that they will have to use a water loop at the facility level and transition from evaporative cooling outside to dry coolers.  It is a huge problem to use evaporative cooling in certain geographies where governments are starting to crack down on water usage by data centers.

TR: What use cases are you seeing implemented in the field right now?

NB: A really broad spectrum across many industries, honestly. The hyperscalers are very interested because they can scale saving one kilowatt of electricity across so many data centers; it becomes a very meaningful number very quickly. We’re seeing it within the telco space, both in the data center and at the edge. That is partially driven by sustainability goals set within the industry and by governments, but also because at the edge they are struggling with the amount of power they can pull into certain sites. They want to use as many of those electrons as possible for compute rather than cooling. The high-performance computing market, specifically with the growth of AI, where things are getting very hot very fast, is definitely being driven toward alternative cooling technologies as well. And a whole range of other companies are reaching out to us, whether because of sustainability goals, government mandates, or simply a desire to save on operational expenses.

TR: Are there particular geographies that are most ripe for switching to a technology like this?

NB: A really hot and humid environment is generally where you have the least efficiency within your data center. You can have a PUE over 2.0 just because you’re running air handlers constantly to cool the air in the data center, so the gains add up very quickly when you can get down to a PUE of 1.05 or 1.06. But more temperate climates benefit as well. If you’re able to go to a water loop, our inlet temperature is 40 degrees Celsius, which is relatively hot; if the ambient temperature outside is less than 40 degrees, you don’t even have to use electricity for that heat exchange and can just passively reject heat into the air. And of course, we’ve got some places where it’s cold much of the year, for example the upper Midwest in the US or northern and central Europe, where heat recapture becomes a real requirement as well. They can take the heat that’s being generated and use it for building heat, water heating, etc., making it ultra-sustainable.
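To put those PUE figures in perspective, here is a minimal back-of-the-envelope sketch in Python. The 2.0 and 1.05 values come from the interview; the 1 MW IT load is a hypothetical example for illustration.

```python
# Back-of-the-envelope PUE comparison using the figures above.
# PUE = total facility power / IT power, so overhead = (PUE - 1) * IT power.

IT_LOAD_KW = 1_000   # hypothetical 1 MW of IT load (not from the interview)
PUE_AIR = 2.0        # hot/humid air-cooled facility, per the interview
PUE_PLC = 1.05       # Precision Liquid Cooling figure cited above

overhead_air = (PUE_AIR - 1) * IT_LOAD_KW   # 1,000 kW of cooling/overhead
overhead_plc = (PUE_PLC - 1) * IT_LOAD_KW   # 50 kW of overhead
saved_kw = overhead_air - overhead_plc

print(f"Air-cooled overhead:    {overhead_air:,.0f} kW")
print(f"Liquid-cooled overhead: {overhead_plc:,.0f} kW")
print(f"Continuous saving:      {saved_kw:,.0f} kW "
      f"(~{saved_kw * 8760 / 1000:,.0f} MWh per year)")
```

On those assumptions, a single 1 MW facility saves roughly 950 kW of continuous overhead, which is why the scaling argument for hyperscalers above becomes "a very meaningful number very quickly."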

TR: Why have we not seen this technology developed earlier?

NB: I think we got caught up on a couple of technologies that kept people from using their imagination. First it was liquid cold plate technology, a relatively old approach that moves water across a heat exchanger mounted on the chip. That makes cooling one individual component more efficient, but it’s like a hybrid car: you still have two engines inside. Then people figured out that dielectric fluid is a pretty good conductor of heat and started the tank or immersion methodology, which distracted the market for a while. Meanwhile our company up in northern England had an idea and was able to patent it, but we remained relatively quiet for a while to make sure we got it right. Now we’re getting affirmation from the major players within IT ecosystems, and it’s time for us to make it a commercial technology.

TR: Where are we now on the adoption curve?

NB: As a company we are right there at the inflection point, and I think the industry is too. If you look at the roadmaps of Intel, AMD, and NVIDIA, you know that the next generation is starting to push against the limits of air cooling. People are talking about how to cool a one-kilowatt chip; for Iceotope’s technology, this is easy and already done. When server environments start pushing 1.5 or 2 kilowatt chips, air is simply incapable of capturing and rejecting that heat. We have to look at alternative cooling, and because the industry is starting to test things seriously, it is recognizing the faults of some other technologies. That is opening people up to examining Precision Liquid Cooling in a more serious way.
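As a rough illustration of why air runs out of headroom at those power levels: the airflow needed to remove a heat load scales linearly with that load via Q = m·cp·ΔT. The sketch below uses illustrative assumptions for air properties and temperature rise, not figures from the interview.

```python
# Rough airflow estimate: Q = m_dot * cp * dT  =>  m_dot = Q / (cp * dT).
# All constants below are illustrative assumptions, not interview figures.

AIR_DENSITY = 1.2   # kg/m^3, air near sea level at room temperature
CP_AIR = 1005.0     # J/(kg*K), specific heat capacity of air
DELTA_T = 15.0      # K, assumed air temperature rise across the server

def airflow_m3_per_s(heat_w: float) -> float:
    """Volumetric airflow needed to carry away heat_w watts."""
    mass_flow = heat_w / (CP_AIR * DELTA_T)  # kg/s
    return mass_flow / AIR_DENSITY           # m^3/s

for chip_w in (500, 1000, 2000):
    flow = airflow_m3_per_s(chip_w)
    print(f"{chip_w:>5} W chip -> {flow:.3f} m^3/s (~{flow * 2118.88:.0f} CFM)")
```

Under these assumptions a 2 kW chip needs on the order of 230 CFM of air for that one component alone, before fan power, acoustics, or the practicality of ducting that much air through a dense chassis are considered.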

TR: In what directions do you see Precision Liquid Cooling going to meet future challenges?

NB: We have some interesting partnerships in development that are going to help us adapt and refine this technology in a way that lowers the barrier for people to make the leap. Today, whether you’re a major server OEM or a small ODM, you lay out a motherboard to maximize performance and reduce latency, but also to make sure components aren’t arranged in such a way that airflow carries hot air from one of them onto another. If you no longer have to think about that, because each component can be its own discrete thermal environment cooled independently of everything else, it changes the entire way that you could lay out a board. That could change the entire ecosystem for how you design a server or a JBOD or a switch or whatever. It could be relatively revolutionary over time as you step backwards into the design cycle, even changing how we think about x86 as a platform.

TR: How much of a learning curve is there for techs to start working with such hardware?

NB: That’s one of the great advantages of being in a chassis-based vertical rack system. You can just pull it out on rails and open the cover just like you would with an air-cooled server. Because there’s so little fluid in there, the fluid almost becomes inconsequential. What we have designed is a manifold that sits over the top of each of those components and delivers the fluid gravitationally; it simply pops on and off with your fingers. So there really isn’t a ton of additional training that needs to be done. You don’t need special equipment. You don’t need cranes. You don’t need any of that. You just operate like you normally would.

TR: Where do you think we will see signs of movement toward Precision Liquid Cooling?

NB: The transition is coming. It’s inevitable. The days of simply spinning fans faster and trying to make the air inside of data centers colder will come to an end. The first adopters of alternatives are going to be those that have some sort of external force causing them to move, whether that’s the heat generated by the chips themselves or some external entity like a government forcing their hand on electricity or water consumption. Precision Liquid Cooling is the technology that will be the least disruptive but the most effective. You will have the ability to save a great deal of electricity, reduce costs, and meet your sustainability goals for real: not by planting seedlings in the Amazon as an offset, but by actually and materially reducing your carbon footprint. Keep an eye on the hyperscalers; they have one of the biggest incentives to make big moves quickly. I would also be on the lookout for what the OEMs are starting to do. They are nervous that their next generations will produce a wonderfully powerful chip that may not be able to operate at full power because it’s too hot. They have a vested interest in making sure that people consider alternatives, and I think they’re going to be the most vocal and most obvious leaders in making the transition to alternative cooling.

TR: Thank you for talking with Telecom Ramblings!


Categories: Datacenter · Energy · Industry Spotlight
