Data Center Cooling: What Works at 30kW per Rack
Four cooling approaches that actually hold up when rack density crosses the threshold where CRAC units stop coping.

Most data center cooling articles still assume a 5 to 8 kW rack. Those articles stopped being useful about the time anyone asked you to host a GPU workload. The honest conversation about cooling starts at 20 kW per rack and gets interesting at 30 kW, which is where a lot of our customers have landed trying to consolidate VDI farms, run inference on-prem, or squeeze more density out of existing floor space. Here is what we have seen work, what we have seen fail, and what the vendors rarely put in their slide decks.
The Density Curve Nobody Wants to Draw
The first thing worth internalizing: cooling cost does not scale linearly with density. It steps. You can do 5 to 10 kW per rack with perimeter CRAC units and good hot aisle containment. At 15 to 20 kW you need in-row cooling or rear-door heat exchangers. Above 25 kW you are talking about liquid — direct-to-chip, immersion, or a hybrid — whether you want to be or not. Each step change requires mechanical, electrical, and plumbing decisions that cannot be undone cheaply.
This matters because density claims on data sheets are marketing numbers. A close-coupled rear-door heat exchanger spec sheet will tell you it handles 40 kW. It will, under ideal supply water temperature and perfect rack airflow. In a real cage with a messy cabling story and a failed filter, you will see 25 kW.
Approach 1: Hot Aisle Containment Done Properly
Before you spend a dollar on anything exotic, fix your containment. We still walk into data centers in 2022 with chicken-wire ceilings and gaps under the raised floor that bypass half the cold air. Proper containment — sealed ceilings, blanking panels in every empty U, brush grommets on every cable penetration — will typically drop PUE by 0.10 to 0.20 on its own. That is capacity you get back without buying a single new cooling unit.
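To put that PUE delta in concrete terms, here is a back-of-the-envelope sketch in Python. The 1 MW IT load and the 1.60 starting PUE are assumed figures for illustration, not measurements from any particular hall.

```python
# Facility power saved by a containment-driven PUE improvement.
# PUE = total facility power / IT power, so overhead = IT load * (PUE - 1).
# Assumptions (illustrative): 1 MW of IT load, PUE falling from 1.60 to 1.45.

it_load_kw = 1_000
pue_before = 1.60
pue_after = 1.45   # a 0.15 drop, the midpoint of the 0.10-0.20 range above

facility_before_kw = it_load_kw * pue_before
facility_after_kw = it_load_kw * pue_after

saved_kw = facility_before_kw - facility_after_kw
saved_mwh_per_year = saved_kw * 8760 / 1000   # continuous saving over a year

print(f"Before: {facility_before_kw:.0f} kW total draw")
print(f"After:  {facility_after_kw:.0f} kW total draw")
print(f"Saved:  {saved_kw:.0f} kW, roughly {saved_mwh_per_year:,.0f} MWh per year")
```

On those assumptions, blanking panels and brush grommets are worth about 150 kW of continuous load, which buys a lot of grommets.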
The thing most operators get wrong: containment is a discipline, not a project. It degrades every time someone pulls a server and does not re-blank the opening. We audit cages quarterly with thermal cameras and find hot spots caused entirely by missing panels and unsealed penetrations. Fix those before anyone says the word "immersion."
When containment runs out of headroom
Containment is a ceiling, not a multiplier. Once you pass roughly 12 kW per rack with perimeter cooling, you cannot get enough cold air to the inlet fast enough regardless of how sealed the aisle is. The air has to come closer to the load.
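The limit is plain air-side physics. The sketch below (Python, assuming a 12 K temperature rise across the servers and standard air properties; the rise is an assumption, real servers vary) shows how fast the required airflow grows with rack power.

```python
# How much cold air a rack needs just to carry its heat away.
# Q = P / (rho * cp * dT): volumetric airflow for a given power and air temp rise.
# Assumptions: air density 1.2 kg/m^3, cp 1005 J/(kg*K), 12 K rise across the servers.

RHO_AIR = 1.2        # kg/m^3
CP_AIR = 1005.0      # J/(kg*K)
DELTA_T = 12.0       # K, assumed server inlet-to-exhaust rise

def required_airflow_m3s(rack_kw: float) -> float:
    """Volumetric airflow (m^3/s) needed to remove rack_kw of heat."""
    return rack_kw * 1000.0 / (RHO_AIR * CP_AIR * DELTA_T)

for rack_kw in (5, 12, 20, 30):
    m3s = required_airflow_m3s(rack_kw)
    cfm = m3s * 2118.9   # 1 m^3/s is roughly 2,119 CFM
    print(f"{rack_kw:>2} kW rack: {m3s:.2f} m^3/s (~{cfm:,.0f} CFM)")
```

A 30 kW rack wants north of 4,000 CFM at its inlet. Delivering that from perimeter units through floor tiles, across the room, without recirculation, is the part that stops working.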
Approach 2: In-Row and Rear-Door Cooling
In-row cooling (APC InRow, Vertiv Liebert, Stulz CyberRow) puts a cooling unit between racks. Rear-door heat exchangers (CoolIT, Motivair, nVent) hang a water-cooled door on the back of each rack and capture the heat before it enters the room. Both move the heat exchange closer to the source, and both work well in the 15 to 30 kW range.
The trade is plumbing. You now need chilled water loops running through the white space, which means leak detection, drip trays, shutoff valves that actually work, and an operations team that is comfortable with the idea of water a meter away from production hardware. We have never had a leak take down a customer workload, but we have had several false alarms from humidity sensors, and we have replaced more ball valves than I care to count.
The rear-door math
Rear-door exchangers are our default recommendation for customers densifying an existing hall. You do not have to re-architect the raised floor, you do not need a wholesale redesign of airflow, and you can deploy them rack by rack as density grows. The downside is water temperature: to get their full rated capacity you want 18 to 22 degrees Celsius supply water, which can require an upgrade to the chiller plant if the building was designed for traditional 7 degree Celsius chilled water loops.
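For intuition on why the spec-sheet number and the field number diverge, here is a toy effectiveness model of a rear door in Python. The effectiveness, airflow, and temperature figures are assumptions picked to roughly reproduce the 40 kW versus 25 kW story from earlier, not any vendor's ratings.

```python
# Toy rear-door capacity model: Q = eff * m_dot_air * cp_air * (T_exhaust - T_water_supply).
# Treats the air stream as the limiting side, which is typical when water flow is generous.
# All numbers below are assumptions for illustration, not any vendor's data.

RHO_AIR = 1.2        # kg/m^3
CP_AIR = 1005.0      # J/(kg*K)
EFFECTIVENESS = 0.7  # assumed air-to-water effectiveness of the door coil

def door_capacity_kw(airflow_m3s: float, t_exhaust_c: float, t_water_c: float) -> float:
    """Heat the door can capture (kW) for a given airflow and temperature approach."""
    m_dot_cp = airflow_m3s * RHO_AIR * CP_AIR
    return EFFECTIVENESS * m_dot_cp * (t_exhaust_c - t_water_c) / 1000.0

# Spec-sheet conditions: full airflow, hot exhaust, supply water on temperature.
print(f"Ideal:    {door_capacity_kw(2.0, 42.0, 18.0):.1f} kW")
# Field conditions: a clogged filter cuts airflow, messy cabling lowers exhaust temp,
# and the supply water runs a couple of degrees warm.
print(f"Degraded: {door_capacity_kw(1.4, 38.0, 20.0):.1f} kW")
```

Same door, same loop: it is a 40 kW door when the exhaust is hot, the airflow is full, and the water is on temperature, and closer to a 20 kW door when none of those hold.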
Approach 3: Direct Liquid Cooling (DLC)
Once you are running H100s at 700 watts per GPU, or dense CPU servers pushing 400 watts per socket, the only way to keep junction temperatures in spec is to put coolant on the die. DLC means a cold plate on the CPU and GPU, plumbed into a coolant distribution unit (CDU) that exchanges heat with facility water.
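The reason liquid wins is heat capacity per unit of flow. A minimal sketch, assuming a water-like coolant and a 10 K rise across the cold plates (both assumptions; a glycol mix has a somewhat lower specific heat and real loops are tuned per vendor):

```python
# Coolant flow needed to carry a given heat load at a given temperature rise.
# m_dot = Q / (cp * dT). Assumptions: water-like coolant, 10 K rise across the cold plates.

CP_WATER = 4186.0   # J/(kg*K), plain water; a glycol mix would be ~10-15% lower
DELTA_T = 10.0      # K, assumed rise from cold plate inlet to outlet

def coolant_flow_lpm(load_w: float) -> float:
    """Approximate coolant flow in litres per minute for a heat load in watts."""
    kg_per_s = load_w / (CP_WATER * DELTA_T)
    return kg_per_s * 60.0   # 1 kg of water is roughly 1 litre

print(f"One 700 W device:       {coolant_flow_lpm(700):.1f} L/min")
print(f"30 kW rack (all liquid): {coolant_flow_lpm(30_000):.1f} L/min")
```

Carrying 30 kW takes about 43 litres per minute of water versus the 4,000-plus CFM of air from the earlier sketch, which is the core of the density argument for liquid.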
This is no longer exotic. It is the standard build for AI racks shipping today, and every major server vendor has factory DLC options. The honest reality: it works extremely well when the integration is done right, and it is a nightmare when it is not. The failure modes are not cooling failures. They are human failures — wrong fittings, wrong torque on quick-disconnects, wrong glycol mixture, wrong labeling when a tech pulls a server for warranty swap.
What we tell customers about DLC
Budget for training and runbooks, not just hardware. Liquid cooling does not forgive sloppy operations. On the other hand, it can run with facility water at 40 to 55 degrees Celsius, which means dry coolers or free cooling year-round in most climates and a 60 to 90 percent cut in mechanical cooling energy compared to a legacy chilled-water plant.
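That 60 to 90 percent range falls out of simple arithmetic once warm water lets you swap compressors for dry coolers. A hedged sketch; the COP and fan-power figures below are assumptions in the typical range, not measurements from our plants.

```python
# Rough comparison: mechanical energy to reject 1 kW of IT heat,
# legacy chilled-water plant vs. warm-water dry cooler operation.
# Both overhead figures are assumptions in the commonly quoted range.

heat_kw = 1.0

chiller_cop = 4.0            # assumed effective COP for compressors, towers, and pumps
chiller_power = heat_kw / chiller_cop          # kW of mechanical power per kW of heat

dry_cooler_overhead = 0.04   # assumed fan + pump power per kW of heat rejected
dry_cooler_power = heat_kw * dry_cooler_overhead

savings = 1.0 - dry_cooler_power / chiller_power
print(f"Chiller plant: {chiller_power:.2f} kW per kW of heat")
print(f"Dry cooler:    {dry_cooler_power:.2f} kW per kW of heat")
print(f"Reduction:     {savings:.0%}")
```

With those assumptions the reduction lands in the mid 80s; how many hours a year you still need compressor trim is what moves you up or down the range.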
Approach 4: Immersion Cooling
Two-phase and single-phase immersion submerge servers in a dielectric fluid. The case for immersion is strong on paper: PUE under 1.05, astonishing density (100+ kW per tank), no fans, quiet rooms. We have installed it, operated it, and watched customers fall in and out of love with it.
The case against is practical. Service is messier. Firmware updates and component swaps mean pulling a dripping server from a tank and waiting for fluid to drain. Warranty stories with OEMs are still inconsistent — some vendors now certify their servers for immersion, some void warranty the moment you dunk them. Supply chain for the dielectric fluid itself has been volatile. And the floor loading for a full immersion tank is significantly higher than a traditional rack, which rules out many existing raised-floor data halls without structural work.
Where immersion makes sense
Greenfield builds for AI training, crypto mining, and HPC where every watt of cooling energy matters and the workload mix is narrow enough that service disruption is a known, planned event. For a mixed enterprise hall with a rotating server population? Rear-door or DLC is almost always a better answer today.
What We Actually Deploy
Our default stack for a modern enterprise data hall targeted at 20 to 35 kW per rack: proper hot aisle containment, raised water supply temperature (18 to 22 C), rear-door heat exchangers on dense racks, direct-to-chip DLC on GPU racks, and a free-cooling economizer on the chiller plant that runs eight to ten months a year in most of the markets we serve. PUE comes in around 1.15 to 1.25 sustained, which is the realistic target for a production hall that has to serve a mixed workload.
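To make the stack concrete, here is how the heat budget on one dense GPU rack splits between the DLC loop and the rear door. The 75 percent liquid-capture fraction is an assumption; the real split depends on the server design.

```python
# Heat budget for one GPU rack in the stack described above.
# Assumption: cold plates capture ~75% of rack heat (GPUs, CPUs); the rest
# (DIMMs, NICs, VRMs, drives) leaves as air and is caught by the rear door.

rack_kw = 35.0          # design point at the top of our 20-35 kW range
liquid_fraction = 0.75  # assumed DLC capture fraction, varies by server design

to_cold_plates_kw = rack_kw * liquid_fraction
to_rear_door_kw = rack_kw - to_cold_plates_kw

print(f"To DLC loop (warm water, dry-cooler friendly): {to_cold_plates_kw:.1f} kW")
print(f"To rear door (air side):                       {to_rear_door_kw:.1f} kW")
```

Even on a liquid-cooled rack a meaningful slice of the heat still leaves as air, which is why the rear doors and the containment discipline do not go away when DLC shows up.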
Three Things Most Cooling Articles Get Wrong
- Density targets are not cooling targets. You design for peak rack power, not average. A 30 kW design point needs cooling headroom above 30 kW or the first hot spot knocks you offline (see the sizing sketch after this list).
- Water temperature is the real lever. Raising supply water from 7 C to 20 C unlocks free cooling and makes DLC and rear-door systems hit their rated capacity. Most buildings were not designed for warm-water operation and retrofits are non-trivial.
- Operations eat innovation for breakfast. The fanciest cooling topology in the world fails fast when the ops team has not been trained on leak response, drain procedures, and containment discipline. Pick the technology your team can operate on their worst day, not their best.
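On the first point, a small sizing sketch, assuming a 30 kW peak per rack, a 20 percent headroom margin, twelve racks to a row, and 60 kW in-row units. Every one of those numbers is an assumption; the point is the shape of the calculation, not the specific answer.

```python
# Size for peak, not average: per-rack cooling requirement with headroom,
# and an N+1 count of in-row units for a row. All inputs are assumed figures.
import math

peak_rack_kw = 30.0     # design for the hottest rack, not the row average
headroom = 0.20         # assumed margin for transients and degraded units
racks_per_row = 12
inrow_unit_kw = 60.0    # assumed usable capacity of one in-row unit

required_per_rack_kw = peak_rack_kw * (1 + headroom)
row_load_kw = racks_per_row * required_per_rack_kw

units_needed = math.ceil(row_load_kw / inrow_unit_kw)
units_with_redundancy = units_needed + 1     # N+1: survive one unit failure

print(f"Per-rack cooling target: {required_per_rack_kw:.0f} kW")
print(f"Row load with headroom:  {row_load_kw:.0f} kW")
print(f"In-row units (N+1):      {units_with_redundancy}")
```

Design to the row's worst hour with a unit out of service, not to the average afternoon.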
Talk with us about your infrastructure
Schedule a consultation with a solutions architect.