Data Centers May Soon Hit 12% of National Electricity Use

The power demands of data centers are expected to increase further as the industry prepares for quantum computing. An analysis funded by the US Department of Energy put data centers' electricity use at roughly 4.4% of the national total, with projections of up to 12% by 2030. This surge follows a period of rapid expansion driven by artificial intelligence, with power draw per rack jumping from 10 kW to more than 50 kW, and the most aggressive hyperscale designs quoting over 100 kW within five years. Reflecting on historical parallels, the author notes that quantum architectures that integrate easily into data centers could follow a timeline similar to the GPU transition (years, not decades), while those requiring dedicated facilities risk mirroring the nearly 40 years it took 19th-century factories to redesign around electric motors.

Historical Parallels: Factory Electrification and GPU Adoption

The shift to accommodate artificial intelligence has dramatically reshaped data center power demands; a single rack now draws over 50 kW, with the most aggressive hyperscale designs quoting over 100 kW. This rapid increase echoes a prior industrial transformation, offering valuable insight into the challenges and timelines associated with integrating fundamentally new technologies. In the 19th century, factories were self-contained powerhouses, each equipped with steam plants or waterwheels dictating building design. When the electric grid arrived, productivity gains were not immediate; redesigning factories around distributed electric motors took nearly 40 years. This protracted period highlights that transformative technologies often demand substantial infrastructural overhauls before realizing their full potential. In contrast, the adoption of GPUs occurred within a decade because they slotted into the existing rack architecture. Operators did not require wholesale facility redesigns, allowing for rapid integration and accelerated benefits.

The distinction is not about technological merit, but rather the extent to which a new technology necessitates changes to the surrounding infrastructure. This historical comparison is particularly relevant as quantum computing emerges as a major demand on data center resources. Architectures that integrate easily into data centers will follow a timeline similar to the GPU transition: years, not decades. However, those requiring dedicated facilities will face a longer integration period, mirroring the nearly four-decade factory electrification process. Both approaches could eventually produce useful quantum computing, but they won’t realize its impact at the same speed. Diraq is actively pursuing integration, collaborating with Dell on systems engineering to ensure quantum processors can be readily installed within standard data center environments, recognizing that the value of quantum lies in how cleanly it integrates with the classical stack around it.

Rack-Scale Quantum Processors Enable Faster Integration

This escalating energy consumption is prompting a fundamental re-evaluation of data center design, and another significant shift is on the horizon: the integration of quantum computing. Unlike previous technological advancements, the successful deployment of quantum processors depends not solely on their capabilities, but on how easily they integrate with existing systems. The historical contrast is instructive: factory electrification took nearly 40 years because it demanded a complete redesign of facilities, while GPUs were adopted within a decade because they slotted into pre-existing rack architectures. The difference between these two stories isn’t the quality of the technology, but how much each one required the surrounding world to change. Most visions of utility-scale quantum computers involve room-sized or warehouse-sized systems, reminiscent of the ENIAC of the 1940s.

But an alternative is emerging: quantum processors designed from the outset as rack-mounted accelerators, sized, cooled, and powered to sit next to GPUs. This approach is gaining traction, as evidenced by initiatives like NVIDIA’s NVQLink, designed to couple quantum processors directly with existing hardware. Diraq is a launch partner for NVQLink and is actively collaborating with Dell on the necessary systems engineering. For the same computational output, estimates of total power required range from around 100 kW for a rack-scale system to hundreds of megawatts for a warehouse-scale system, making the rack-scale approach a considerably easier commitment for data center operators. “The only way is hybrid,” and proactive planning, including assumptions about form factor, cryogenic requirements, and reserved capital expenditure, will be crucial for capturing the benefits of quantum computing as it matures.
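To put the power gap in perspective, a back-of-envelope comparison of the article's round numbers is sketched below. The 200 MW figure is an assumption standing in for "hundreds of megawatts" and is illustrative only, not a vendor specification.

```python
# Illustrative comparison of the power figures quoted above.
# Values are the article's round numbers; 200 MW is an assumed
# stand-in for "hundreds of megawatts".

RACK_SCALE_KW = 100            # rack-scale quantum system, ~100 kW
WAREHOUSE_SCALE_KW = 200_000   # warehouse-scale system, assumed 200 MW
AI_RACK_KW = 50                # a dense AI rack today, 50+ kW

ratio = WAREHOUSE_SCALE_KW / RACK_SCALE_KW
print(f"Warehouse-scale draws roughly {ratio:,.0f}x a rack-scale system")

# A 100 kW quantum rack sits within the planning envelope of today's
# densest AI racks, whereas hundreds of megawatts exceeds the total
# capacity of many entire data center campuses.
print(f"Rack-scale quantum is about {RACK_SCALE_KW / AI_RACK_KW:.0f}x one AI rack")
```

Under these assumed numbers, the warehouse-scale path costs three to four orders of magnitude more power for the same output, which is the crux of the "easier commitment" argument.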

The value of the quantum slice is determined by how cleanly it integrates with the classical stack around it.

Power and Cooling Demands of Quantum Architectures

Francesca Elliott, a thought leader examining the intersection of quantum computing and data center infrastructure, highlights a looming challenge beyond the current demands of artificial intelligence. The industry is largely unprepared for the infrastructural changes quantum architectures may necessitate, a situation reminiscent of past industrial transformations. Drawing a parallel to the 19th century, she explains that early factories were self-contained power sources, but the arrival of the electric grid demanded a complete redesign of facilities, a process that took nearly 40 years to fully realize. “The productivity boom of the 1920s was the payoff from factories that were built natively for the new paradigm, not the ones that were retrofitted from the old one,” she notes, emphasizing the importance of proactive infrastructure planning. The critical distinction, according to Elliott, lies not in the technology itself, but in the extent to which it requires rebuilding the surrounding infrastructure.

Unlike GPUs, which could be slotted into existing rack architectures, quantum computers present a more fundamental challenge. Diraq is actively addressing this by working with Dell on systems engineering, aiming to create quantum processors that integrate easily into existing data center infrastructure.

Every time industry has been through infrastructure transitions, the lesson has been the same: the winners are the ones whose infrastructure was ready for the shift when it arrived.

Francesca Elliott

Data Center Planning for Near-Term Quantum Computing

The impending arrival of quantum computing presents a unique challenge to data center operators, demanding foresight beyond the recent, substantial infrastructure investments made to accommodate artificial intelligence. The power drawn by a single rack has climbed from 10 kW to more than 50 kW, with the most aggressive hyperscale designs quoting over 100 kW. Quantum systems introduce a new layer of complexity, potentially requiring orders of magnitude more power depending on the chosen architecture, from roughly 100 kW for a rack-mounted accelerator sitting next to GPUs to hundreds of megawatts for a room-sized or warehouse-sized installation. Which form factor operators plan for will largely determine how quickly they can put quantum capacity to work.

Dr. Donovan