The Heat Is On - part 1: Cooling the AI Revolution, from Fans to Fluids
Every time you ping ChatGPT, a data centre somewhere gulps more power than your air-con on a 40°C Aussie summer day. AI’s compute hunger is turning data centres into thermal warzones, forcing cooling tech to leap from clunky fans to servers submerged in ocean depths. Now, with AI cranking the thermostat to 11, we’re at a crossroads. Let’s unpack the sprint from air to liquid cooling, NVIDIA’s liquid-first GB300, the ARM vs. x86 showdown, and what it means for Australia’s data centre scene - while championing sustainability.
Note: this started as a thought a few months ago and has evolved, so there's more here than originally planned… Sorted? OK, let's crack on!
First-Gen Air: When PUE Was a Dirty Word (2000–2010)
In the early 2000s, data centres were power hogs, and Power Usage Effectiveness (PUE) was the metric nobody wanted to talk about. PUE measures how much energy a data centre consumes compared to what actually powers the servers (ideal is 1.0). Most facilities rocked a dismal 2.5 - more than half the electricity was lost to cooling and overheads. Hot-aisle/cold-aisle setups, raised floors, and CRAC units roaring like V8s were standard. Virtualisation crammed more VMs onto servers, but the heat spiked faster than a BBQ in January. Energy bills ballooned, and operators started sweating. (further reading)
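If you want to see the arithmetic, here's a quick sketch of how PUE is worked out - the facility figures are illustrative assumptions, not measured data:

```python
# Illustrative only: PUE arithmetic with assumed figures, not measured data.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness = total facility energy / IT equipment energy."""
    return total_facility_kw / it_equipment_kw

# A hypothetical mid-2000s facility: 1 MW of servers, plus another 1.5 MW
# for CRACs, chillers, UPS losses and lighting.
it_load_kw = 1_000.0
overheads_kw = 1_500.0
print(pue(it_load_kw + overheads_kw, it_load_kw))  # 2.5 -- only 40% of the energy reaches the servers
```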
Chasing 1.0: The Air-Cooling Arms Race (2010–2016)

Liquid Enters Stage Left (2016–2020)
Liquid cooling crashed the party. Rear-door heat exchangers and cold-plate loops piped coolant to chips, slashing cooling energy by 30–40%. Coolant Distribution Units (CDUs) became the new MVPs, managing flow without flooding the server room, but retrofitting air-cooled sites was a wallet-draining slog. In Australia, NextDC dipped its toes into liquid for HPC, but the upfront costs scared off most. Immersion cooling loomed: if pipes worked, dunking the whole server must be next level, right? (further reading)
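To get a feel for what a CDU is actually doing, here's a back-of-envelope sketch of the coolant flow needed to carry a rack's heat away. The rack size and temperature rise are assumptions for illustration, not figures from any particular vendor:

```python
# Back-of-envelope sketch with assumed numbers: how much coolant a CDU has to
# circulate to carry a rack's heat away through cold plates.
# Heat balance: Q = m_dot * c_p * delta_T  ->  m_dot = Q / (c_p * delta_T)

WATER_SPECIFIC_HEAT = 4186.0  # J/(kg*K), roughly right for water/glycol mixes

def coolant_flow_lpm(heat_load_kw: float, delta_t_k: float) -> float:
    """Litres per minute of water-like coolant needed to absorb a given heat load."""
    mass_flow_kg_s = (heat_load_kw * 1_000.0) / (WATER_SPECIFIC_HEAT * delta_t_k)
    return mass_flow_kg_s * 60.0  # ~1 kg of water is ~1 litre

# Hypothetical 50 kW GPU rack with coolant warming 10 °C across the cold plates:
print(round(coolant_flow_lpm(50, 10), 1))  # roughly 72 L/min through the loop
```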
Microsoft Goes Swimming – Project Natick (2018–2020)
Microsoft went full Aquaman with Project Natick, sinking 864 servers (27 petabytes of storage) 117 feet beneath the Orkney waves. Ocean water did the cooling for free, hardware failure rates came in at roughly one-eighth of land-based equivalents, and land-based water use was zilch. It was a PR masterstroke, but logistics - hauling pods up for maintenance - and recycling woes sank it by 2020. Still, it proved liquid could tame extreme heat, even if Aussie coastal regulators would balk at server reefs.
Cannonball! The Immersion-Cooling Gold Rush (2020–2024)
Immersion cooling hit like a tidal wave. Servers swim in single-phase (liquid stays liquid) or two-phase (liquid boils to gas, then condenses) dielectric fluids, in open baths or sealed tanks. OCP and ASHRAE set standards, and vendors like Submer and GRCooling scaled up. PUEs dropped below 1.1, and Aussie HPC hubs like Pawsey Supercomputing Centre tested immersion for GPU workloads.
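What does a sub-1.1 PUE actually buy you? A rough sketch, using an assumed IT load and an assumed air-cooled baseline (neither figure comes from the vendors above):

```python
# Rough sketch with assumed figures: what dropping from a well-run air-cooled
# PUE (~1.6) to an immersion PUE (~1.05) means over a year.

HOURS_PER_YEAR = 8_760

def annual_facility_mwh(it_load_mw: float, pue: float) -> float:
    """Total facility energy per year for a given IT load and PUE."""
    return it_load_mw * pue * HOURS_PER_YEAR

it_load_mw = 5.0  # hypothetical 5 MW of GPU kit
air_mwh = annual_facility_mwh(it_load_mw, 1.6)
immersion_mwh = annual_facility_mwh(it_load_mw, 1.05)
print(f"air: {air_mwh:,.0f} MWh, immersion: {immersion_mwh:,.0f} MWh, "
      f"saved: {air_mwh - immersion_mwh:,.0f} MWh per year")
```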

Costs are easing - fluids are cheaper, tanks more common - but it's still niche outside HPC. Enter unicorns like ResetData, which launched an IaaS platform built on liquid immersion, proving it's cost-effective and sustainable, with zero water waste. ResetData's partnership with Centuria Capital Group, an Australian real estate investment firm, enables purpose-built facilities designed for immersion - no retrofitting nightmares. Launching in Feb 2025, these AI factories, designed to NVIDIA's reference architectures, push boundaries while prioritising sustainability. Challenges like lead times and conversions persist, but the concept's proven - customers can now make truly conscious choices, not just buy carbon offsets. (more reading & more again)
Next up in this series on Australia's AI future, we'll look at how changing chip architecture is making heat the bottleneck.