AI data centers: the road to 1 megawatt per rack explained

For this episode of the Techzine TV podcast, we discuss the evolution of data center architecture driven by AI workloads with Steve Carlini, Chief Advocate for AI in Data Centers at Schneider Electric. From 5 kilowatts to 1 megawatt per rack, this conversation explores the technical challenges and innovations in the industry.

Key topics include the shift from CPU to GPU-based computing, the move to 800V DC power distribution, liquid cooling requirements, and how data centers are becoming grid stabilization assets. We also dig a bit deeper into things like microfluidic cooling, photonics integration, and why power densities are skyrocketing with each new GPU generation from Nvidia.

Learn how Schneider Electric collaborates with chip manufacturers to design power and cooling systems six months ahead of new GPU releases, why data centers' water consumption issues may prove temporary, and how SMRs (Small Modular Reactors) could transform data center energy infrastructure.

Topics
0:00 - Introduction
1:33 - Road to 1 megawatt per rack
5:19 - Leap-frogging to 800V DC architecture
7:23 - Liquid cooling requirements
9:43 - Future of data center design
12:15 - Water usage and cooling loops
14:03 - SMRs and grid stabilization
17:01 - Microfluidic cooling technology
18:08 - Flexible power allocation models
