Revolutionising power delivery architecture for future AI server racks

29-05-2025 | Infineon | Power

Infineon Technologies AG is revolutionising the power delivery architecture needed for future AI data centres. In collaboration with NVIDIA, the company is developing the next generation of power systems based on a new architecture with central power generation of 800V high-voltage direct current (HVDC). The new system architecture significantly enhances energy-efficient power distribution across the data centre and enables power conversion directly at the AI chip (GPU) within the server board. Infineon's expertise in power conversion solutions from grid to core, based on all relevant semiconductor materials (Si, SiC and GaN), is accelerating the roadmap to a full-scale HVDC architecture.

This revolutionary step paves the way for the implementation of advanced power delivery architectures in accelerated computing data centres and will further improve reliability and efficiency. As AI data centres already utilise more than 100,000 individual GPUs, efficient power delivery is becoming increasingly important. AI data centres will require power outputs of one megawatt (MW) or more per IT rack by the end of the decade. Thus, the HVDC architecture, coupled with high-density multiphase solutions, will set a new standard for the industry, driving the development of high-quality components and power distribution systems.
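To see why a higher distribution voltage matters at these power levels, consider the current a 1 MW rack draws. The sketch below compares an 800V HVDC bus with a conventional low-voltage rack bus; the 54V baseline and the busbar resistance are illustrative assumptions, not figures from Infineon or NVIDIA.

```python
# Illustrative comparison of rack distribution current and resistive (I^2 * R)
# conduction loss at a conventional 54 V rack bus versus an 800 V HVDC bus.
# The 1 MW rack power is the article's projection; the 54 V baseline and the
# 1 milliohm distribution-path resistance are assumed values for illustration.

RACK_POWER_W = 1_000_000       # ~1 MW per IT rack (projected, per article)
BUSBAR_RESISTANCE_OHM = 0.001  # assumed lumped distribution-path resistance

def distribution_loss(voltage_v: float) -> tuple[float, float]:
    """Return (bus current in A, I^2*R conduction loss in W) at a bus voltage."""
    current = RACK_POWER_W / voltage_v
    loss = current ** 2 * BUSBAR_RESISTANCE_OHM
    return current, loss

for v in (54.0, 800.0):
    current, loss = distribution_loss(v)
    print(f"{v:>5.0f} V bus: {current:>8.0f} A, {loss / 1000:.1f} kW conduction loss")
```

Raising the bus from 54V to 800V cuts the current by roughly a factor of 15, and since conduction loss scales with the square of the current, the resistive loss in the same distribution path falls by roughly a factor of 220.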

"Infineon is driving innovation in artificial intelligence," said Adam White, division president of Power and Sensor Systems at Infineon. "The combination of Infineon's application and system know-how in powering AI from grid to core, combined with NVIDIA's world-leading expertise in accelerated computing, paves the way for a new standard for power architecture in AI data centres to enable faster, more efficient and scalable AI infrastructure."

"The new 800V HVDC system architecture delivers high reliability, energy-efficient power distribution across the data centre," said Gabriele Gorla, vice president of system engineering at NVIDIA. "Through this innovative approach, NVIDIA is able to optimise the energy consumption of our advanced AI infrastructure, which supports our commitment to sustainability while also delivering the performance and scalability required for the next generation of AI workloads."

At present, the power supply in AI data centres is decentralised. This means that the AI chips are supplied with power by a large number of PSUs. The future system architecture will be centralised, making the best possible use of the constrained space in a server rack. This will increase the importance of leading-edge power semiconductor solutions that use the fewest power conversion stages and permit upgrades to even higher distribution voltages.
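The benefit of minimising conversion stages follows from simple arithmetic: end-to-end efficiency is the product of per-stage efficiencies, so every stage removed compounds into a measurable gain. The sketch below illustrates this; the stage counts and the 98% per-stage figure are assumptions for illustration, not Infineon or NVIDIA data.

```python
# Sketch: the end-to-end efficiency of a power path is the product of the
# efficiencies of its cascaded conversion stages, so eliminating a stage
# raises overall efficiency. Stage counts and the 98% per-stage efficiency
# are illustrative assumptions only.

from math import prod

def chain_efficiency(stage_efficiencies: list[float]) -> float:
    """End-to-end efficiency of a chain of cascaded conversion stages."""
    return prod(stage_efficiencies)

legacy_path = [0.98] * 5  # e.g. five conversion stages from grid to GPU core
hvdc_path = [0.98] * 3    # fewer stages with centralised 800 V HVDC

print(f"legacy path efficiency: {chain_efficiency(legacy_path):.3f}")
print(f"HVDC path efficiency:   {chain_efficiency(hvdc_path):.3f}")
```

Under these assumed numbers, dropping from five stages to three lifts grid-to-core efficiency by roughly four percentage points, which at megawatt rack powers is tens of kilowatts recovered per rack.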


By Seb Springall

Seb Springall is a seasoned editor at Electropages, specialising in the product news sections. With a keen eye for the latest advancements in the tech industry, Seb curates and oversees content that highlights cutting-edge technologies and market trends.