While traditional data centers use between 10 and 25 MW of power, demand by hyperscale AI factories can exceed 100 MW. Data centers are growing rapidly in size, with next-generation facilities projected to require 1 GW or more of power.
The growth in AI workloads is changing the way chips and server racks are built within data centers. Over the next few years, we will see a shift from today’s already powerful kilowatt-scale racks to 1 MW racks in some AI factories. Each rack will consume about the same amount of electricity as 1,000 average households.
Traditional low-voltage power delivery is no longer sufficient for modern server rooms. Increasing power delivery efficiently and at scale requires major advancements in power distribution technologies and architecture.
Leading AI chip makers, tech giants, data center operators, and power system providers are collaborating on solutions. Nvidia recently put a stake in the ground on applying a higher voltage approach by proposing an 800 VDC architecture.
The idea of a higher-voltage DC data center is not new. We were discussing 300 VDC architectures over a decade ago. But today, the urgency is vastly greater. The transition from kilowatt-scale racks to gigawatt-scale data centers, fueled by massive GPU deployments, is forcing a more aggressive approach to power system upgrades.
To sustain this trajectory, the tech industry can no longer rely on incremental improvements to data center power systems. We must make bold changes, quickly, to deliver the power needed to fuel and sustain this growth.
The Problem: Too Many Conversions, Too Much Waste
The traditional data center power train is constrained by AC power delivery. Electricity flows from the utility as medium-voltage AC, but before it reaches the racks, it goes through a multi-stage conversion process: AC to DC for the uninterruptible power supply (UPS) and battery backup, DC back to AC for facility distribution, and finally, AC back to low-voltage DC at the rack. A typical legacy rack power system involves five or more conversion steps, and each conversion wastes precious energy.
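The compounding effect of those conversion stages can be sketched with a few lines of arithmetic. The per-stage efficiencies below are illustrative assumptions, not measured values from any particular facility:

```python
# Cascaded efficiency of a hypothetical five-stage legacy power train.
# Stage efficiencies are assumptions for illustration only.
legacy_stages = [0.98, 0.97, 0.98, 0.96, 0.97]

def chain_efficiency(stages):
    """End-to-end efficiency is the product of every stage's efficiency."""
    eff = 1.0
    for s in stages:
        eff *= s
    return eff

eff = chain_efficiency(legacy_stages)
print(f"End-to-end efficiency: {eff:.1%}")
# Extra power drawn from the grid per 1 MW actually delivered to the IT load:
print(f"Overhead per 1 MW delivered: {(1 / eff - 1) * 1000:.0f} kW")
```

Even with each stage in the high 90s, five stages in series lose more than a tenth of the input power before it ever reaches a chip.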
For every watt of power lost, you must expend even more energy to cool the heat it generates. As rack power density surpasses the current 100 kW threshold and approaches the projected 1 MW mark, these losses become unsustainable.
Data center operators’ main interest is no longer saving space; it is energy efficiency. One of the most direct ways to save electricity is to eliminate AC-DC conversions across the power chain. Eliminating three of these conversions, for example, can improve end-to-end energy efficiency by 3 to 5 percent at the gigawatt scale, a difference that can amount to tens of millions of dollars in electricity savings.
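A rough back-of-the-envelope calculation shows how a few percentage points translate into dollars. The electricity price and round-the-clock utilization here are illustrative assumptions, not figures from the article:

```python
# Annual electricity savings from a fractional efficiency gain at a 1 GW site.
# Price (USD/MWh) and 100% utilization are simplifying assumptions.
def annual_savings_usd(power_mw, gain, price_per_mwh=60.0, hours=8760):
    """Cost of the energy no longer wasted over one year of operation."""
    return power_mw * hours * gain * price_per_mwh

for gain in (0.03, 0.05):
    millions = annual_savings_usd(1000, gain) / 1e6
    print(f"{gain:.0%} efficiency gain at 1 GW -> ~${millions:.0f}M per year")
```

At these assumed prices, a 3 to 5 percent gain on a gigawatt-scale load lands squarely in the tens of millions of dollars per year, consistent with the figure above.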
The Solution: Direct, Higher-Voltage DC
Many in the industry believe a prudent way to combat this loss of energy within the process is to directly distribute higher-voltage DC power within the data center and to the rack. Nvidia is focused on 800 VDC, but the tech industry is also considering 1000 VDC or 1500 VDC in the future.
The transition is driven by the laws of physics: power equals voltage times current, so raising the voltage of the primary power distribution bus drastically reduces the current required to deliver the same amount of power. Lower current allows for significantly thinner conductors and busways, which means less copper. That may not sound like much, but it could amount to roughly 200 kilograms less copper per 1 MW rack for the busbars alone. Scaling this to a 1 GW data center could save up to half a million tons of copper.
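The current reduction is easy to quantify from P = V × I. As a sketch, compare a 1 MW rack fed from a 54 VDC bus (a common in-rack voltage today, used here as an assumed baseline) against the proposed 800 VDC bus:

```python
# Current required to deliver a given power at a given bus voltage (P = V * I).
def bus_current(power_w, voltage_v):
    return power_w / voltage_v

p = 1_000_000  # 1 MW rack
i_low = bus_current(p, 54)    # assumed low-voltage DC baseline
i_high = bus_current(p, 800)  # proposed 800 VDC distribution bus
print(f"54 V bus:  {i_low:,.0f} A")
print(f"800 V bus: {i_high:,.0f} A")
# At a fixed allowable current density, busbar cross-section (and thus
# copper mass) scales roughly linearly with current:
print(f"Current reduction: {i_low / i_high:.1f}x")
```

Cutting the bus current by roughly 15x is what makes the thinner busbars, and the copper savings, possible.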
Transitioning to higher-voltage DC distribution is a transformative upgrade. It represents a paradigm shift demanding a torrent of exciting and disruptive innovation. Some fundamental building blocks don’t commercially exist today. The architecture presents technical hurdles, including safety concerns and challenging operating conditions. The industry must align on common voltage ranges, connector interfaces, and safety practices.
One of the most important capabilities needed to make high-voltage DC infrastructure viable is converting the 13.8 kV AC grid power to 800 VDC at the data center perimeter. This requires an advanced type of electrical transformer, often called a solid-state transformer, that uses power semiconductor devices and sophisticated control circuitry to manage and transform electric power. Twenty years ago, a device that could run at such high voltages was feasible but highly complex, bulky, and expensive. Today, however, there is considerable active research and development in this area, and companies have built test prototypes.
The second piece of the road map is distributing the DC power to the racks. Data centers can’t use AC circuit breakers to do this, because DC protection is surprisingly more difficult than AC protection. A traditional AC breaker can interrupt current easily because the AC waveform naturally crosses zero every few milliseconds; in a DC system, the current never crosses zero. To interrupt a fault in a high-voltage DC system, you need a device that can break the current instantly, without generating massive thermal losses during normal operation. The best way to protect against short circuits is to use advanced semiconductor technology to create a solid-state circuit breaker that is fit for purpose.
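The zero-crossing advantage that AC breakers rely on can be made concrete with a one-line calculation, here assuming a 60 Hz grid:

```python
# Time between natural zero crossings of an AC current waveform.
# A mechanical AC breaker waits for one of these crossings to extinguish
# the arc; a DC fault current offers no such opportunity, so a solid-state
# DC breaker must actively force the current to zero itself.
grid_freq_hz = 60.0  # assumed North American grid frequency
half_period_ms = 1000 / (2 * grid_freq_hz)
print(f"AC current crosses zero every {half_period_ms:.1f} ms")
```

Without those periodic zero crossings, interrupting kiloamp-scale DC fault currents falls to power semiconductors rather than a mechanical contact and an arc chute.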
Meeting AI’s Energy Demands Starts Now
AI is one of the biggest stories in the energy world today because it is fundamentally an electrical challenge, not just a computing one. With the race for AI dominance moving at warp speed, it falls to electrification partners to develop these foundational components quickly, reliably, and with the deep technical expertise needed to ensure safety and performance.