IBM Pioneers Quantum-Centric Supercomputing: CPUs, GPUs & QPUs Unite for New Era

IBM is pioneering a new era of “quantum-centric supercomputing” by uniting CPUs, GPUs, and quantum processing units (QPUs) – a development poised to shatter the limitations of today’s most powerful computers. As of January 29, 2026, researchers are demonstrating this hybrid approach, showcasing how advanced GPUs working alongside QPUs can accelerate workflows and boost the fidelity of quantum computations. This isn’t about replacing existing technology, but rather integrating it; IBM envisions quantum as “one piece of a paradigm combining every computing tool we have available to solve problems beyond anything that’s possible today.” Advances from partners at Oak Ridge National Laboratory, AMD, and RIKEN are now bringing this future to life, unlocking performance and accuracy previously unattainable.

CPUs, GPUs, and QPUs: Unique Architectures & Strengths

CPUs, GPUs, and QPUs each bring a distinct architecture and distinct strengths to increasingly complex problems. Central processing units (CPUs) remain the foundational workhorse, adept at executing instructions sequentially and orchestrating complex workloads. They are increasingly complemented by the parallel processing capabilities of graphics processing units (GPUs), which excel at performing numerous simpler operations simultaneously, leveraging thousands of threads to accelerate calculations involving tensors, the multidimensional data structures crucial to modern computing. A one-dimensional tensor, or vector, is a single column of numbers; higher-rank tensors build into representations of data spanning multiple dimensions, extending to something like “many files each with many spreadsheets” in the case of a 4D tensor.

Quantum processing units (QPUs), however, introduce a fundamentally different paradigm, storing information within the states of a quantum system. Every quantum circuit can be represented as a sequence of mathematical operations using matrices that must follow a set of rules, and QPUs unlock matrix operations inaccessible to classical processors. Performing these operations natively on a QPU circumvents the exponential space requirements that would cripple classical hardware: an operation on a 50-qubit circuit might require matrices with up to 2^50 entries to simulate accurately, far beyond the abilities of any GPU. This inherent capability positions QPUs not as replacements for CPUs and GPUs, but as synergistic partners. The emergence of quantum-centric supercomputing (QCSC) embodies this integration.
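The tensor ranks described here are easy to make concrete with a short NumPy sketch (NumPy’s `ndim` attribute is the tensor’s rank; the array names and shapes are illustrative):

```python
import numpy as np

# Rank-1 tensor (a vector): numbers in a single column.
vector = np.array([1.0, 2.0, 3.0])

# Rank-2 tensor (a matrix): a single spreadsheet-like grid.
matrix = np.arange(6.0).reshape(2, 3)

# Rank-4 tensor: "many files, each with many spreadsheets":
# here, 4 files x 5 sheets per file x a 2x3 grid per sheet.
tensor_4d = np.zeros((4, 5, 2, 3))

print(vector.ndim, matrix.ndim, tensor_4d.ndim)  # 1 2 4
```

GPUs accelerate exactly these kinds of arrays, applying the same arithmetic across thousands of entries in parallel.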
Recent advances, demonstrated by collaborations between IBM, Oak Ridge National Laboratory, AMD, RIKEN, and Algorithmiq, reveal how state-of-the-art GPUs, alongside QPUs, can accelerate workflows and enhance quantum computation fidelity. New algorithms, such as sample-based quantum diagonalization (SQD) techniques, are designed to exploit this synergy. SQD, for instance, leverages a quantum computer to generate a shortlist of configurations, which are then processed by a classical computer to create a simpler tensor for analysis, iteratively refining the results. This interplay is proving remarkably effective; researchers “measured speedups running SQD on Frontier on the order of 100x compared to the CPU base case.”
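The iterative SQD loop described above can be sketched in a few lines of Python. Everything here is a hedged stand-in, not IBM’s implementation: `sample_qpu` fakes the quantum sampling step with random bitstrings, and `project_hamiltonian` fakes the “simpler tensor” with a random Hermitian matrix indexed by the shortlist; a real workflow would run circuits on actual hardware.

```python
import numpy as np

rng = np.random.default_rng(0)
n_qubits = 6

def sample_qpu(n_shots):
    # Stand-in for running a circuit on a QPU: returns a shortlist
    # of measured bitstring configurations (duplicates collapse).
    return {tuple(rng.integers(0, 2, n_qubits)) for _ in range(n_shots)}

def project_hamiltonian(configs):
    # Stand-in for the "simpler tensor": a small Hermitian matrix
    # indexed only by the sampled configurations.
    d = len(configs)
    h = rng.standard_normal((d, d))
    return (h + h.T) / 2

energy = None
for _ in range(3):  # iterative refinement, as in the text
    configs = sorted(sample_qpu(n_shots=32))      # quantum step
    h_proj = project_hamiltonian(configs)         # classical tensor
    energy = np.linalg.eigvalsh(h_proj)[0]        # ground-state estimate
print(energy)
```

The division of labor mirrors the article: the QPU supplies the shortlist, and the classical side builds and diagonalizes a matrix that is tiny compared to the full problem.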

Furthermore, GPUs are being employed to improve the accuracy of quantum computations themselves through tensor-based error mitigation techniques. Algorithmiq’s work uses tensors to create a noise model and then inverts that model to remove the noise from the output of the quantum circuit, allowing researchers to extract meaningful results from larger, more complex problems. This signifies a future where classical and quantum resources are not simply combined, but deeply interwoven, orchestrating computations beyond the reach of any single architecture. As IBM puts it, “Our vision for quantum-centric supercomputing incorporates classical computing hardware throughout the computation,” ultimately demanding orchestration between diverse compute resources.
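A minimal sketch of the model-and-invert idea, assuming a simple readout-error model in which each measured bit flips independently with a known probability. The two-bit noise model is built as a tensor (Kronecker) product and then inverted; the specific model and numbers here are illustrative, not Algorithmiq’s.

```python
import numpy as np

p_flip = 0.1  # assumed readout-error rate for a single bit
noise_1q = np.array([[1 - p_flip, p_flip],
                     [p_flip, 1 - p_flip]])
# Two-bit noise model as a tensor (Kronecker) product of one-bit models.
noise = np.kron(noise_1q, noise_1q)

ideal = np.array([0.5, 0.0, 0.0, 0.5])   # e.g. a Bell-state distribution
noisy = noise @ ideal                     # what noisy hardware would report
mitigated = np.linalg.inv(noise) @ noisy  # invert the model to undo the noise

print(np.allclose(mitigated, ideal))  # True
```

In practice the noise model is far richer and the inversion is where GPUs earn their keep, since the tensors involved grow quickly with qubit count.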

Sample-Based Quantum Diagonalization Accelerates Simulations

Researchers are achieving significant gains in computational speed and accuracy by integrating quantum processing with established classical methods, specifically through a technique called sample-based quantum diagonalization (SQD). This approach isn’t about replacing conventional supercomputers, but augmenting them, allowing scientists to tackle simulations previously considered intractable. New work from partners at AMD, Oak Ridge National Laboratory, and RIKEN demonstrates SQD implemented across IBM QPUs and supercomputing clusters, offering “a first look at our vision for the future of computing.”

SQD addresses a core challenge in fields like chemistry and materials science: accurately simulating the behavior of complex systems. Describing these systems relies on equations called Hamiltonians, but extracting meaningful data – such as energy levels – demands immense computational resources. Even the world’s most powerful supercomputers can only approximate solutions due to the sheer size of the required tensors. SQD offers a pathway to improved approximations by leveraging the unique capabilities of quantum computers. The process begins by encoding the Hamiltonian into a quantum circuit, running it on a QPU, and generating a shortlist of configurations for further study. This information is then passed to a classical computer, which uses it to create a simplified tensor and diagonalize it, ultimately extracting physical information about the system. The interplay between quantum circuits and classical tensors is key to SQD’s efficiency. A QPU can handle quantum circuits that would overwhelm a GPU with their computational demands, while CPUs and GPUs excel at the parallel processing of simpler tensors and orchestrating the overall workflow. Recent experiments have yielded impressive results: further optimization with the Thrust library on the Miyabi supercomputer’s NVIDIA GH200 GPUs delivered “another 20% performance improvement beyond OpenMP alone.”
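A minimal NumPy illustration of the “simplified tensor” step, assuming a system small enough that the full Hamiltonian can be written down (a random Hermitian matrix stands in for a real one). Projecting the Hamiltonian onto the sampled configurations yields a small matrix whose lowest eigenvalue upper-bounds the true ground energy, by the variational principle.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 2**4  # full space of a toy 4-qubit Hamiltonian
h_full = rng.standard_normal((dim, dim))
h_full = (h_full + h_full.T) / 2  # make it Hermitian

exact = np.linalg.eigvalsh(h_full)[0]  # true ground energy

# Suppose the QPU shortlist picked these basis-state indices.
shortlist = [0, 3, 5, 6, 9, 12]
h_small = h_full[np.ix_(shortlist, shortlist)]  # the simplified tensor
approx = np.linalg.eigvalsh(h_small)[0]         # classical diagonalization

# The subspace estimate can never dip below the exact ground energy.
print(approx >= exact)  # True
```

The point of the quantum step is to pick a shortlist where the true ground state mostly lives, so that the small diagonalization lands close to the exact answer.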

Beyond speed, this hybrid approach is also improving the reliability of quantum computations. New error-mitigation techniques utilize tensor-based models to undo the effects of noise inherent in quantum processors, allowing researchers to “extract meaningful results for problems larger than those classical computing alone can verify.” This is exemplified by work from Algorithmiq, Trinity College Dublin, and IBM, which employs these techniques with dual unitary circuits to study chaotic quantum many-body systems, now available as a “Qiskit Function.” The convergence of tensor calculations and quantum circuits is not merely a technological advancement; it’s a fundamental shift in how we approach complex problem-solving.

In reality, quantum will be one piece of a paradigm combining every computing tool we have available to solve problems beyond anything that’s possible today.

AMD/NVIDIA GPUs Boost SQD Performance on IBM QPU

Recent breakthroughs are demonstrating the power of tightly integrated classical and quantum computing, with significant performance gains achieved by pairing IBM Quantum Processing Units (QPUs) with advanced graphics processing units from AMD and NVIDIA. Researchers are moving beyond theoretical models, actively combining CPUs, GPUs, and QPUs in what IBM terms “quantum-centric supercomputing.” A key area of progress lies in sample-based quantum diagonalization (SQD) techniques, which promise to refine simulations in fields like chemistry and materials science. New work at Oak Ridge National Laboratory, alongside collaborations with RIKEN and Algorithmiq, is bringing these hybrid approaches to life.

The integration isn’t simply about adding more processing power; it’s about leveraging the unique strengths of each architecture. CPUs excel at orchestrating complex workloads, while GPUs are optimized for the rapid, parallel calculations involving tensors – multidimensional data structures crucial for many scientific simulations. QPUs, meanwhile, access mathematical operations inaccessible to classical processors, operating on information stored in quantum states. A challenge arises when simulating quantum circuits, as representing even a modest 50-qubit circuit classically requires matrices with up to 2^50 entries, exceeding GPU capacity. Recent experiments implementing SQD on IBM QPUs and the Frontier supercomputer yielded impressive results.
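The memory blowup behind that 2^50 figure is easy to check with a back-of-envelope sketch, counting 16 bytes per complex amplitude of an n-qubit statevector (the per-amplitude cost is an assumption; the exponential growth is not):

```python
def statevector_bytes(n_qubits, bytes_per_amp=16):
    # An n-qubit state has 2**n complex amplitudes;
    # each complex128 amplitude takes 16 bytes.
    return bytes_per_amp * 2**n_qubits

for n in (30, 50):
    gib = statevector_bytes(n) / 2**30
    print(f"{n} qubits: {gib:,.0f} GiB")
# 30 qubits: 16 GiB      (fits on a large GPU)
# 50 qubits: 16,777,216 GiB (about 16 PiB, beyond any GPU)
```

Each added qubit doubles the storage, which is why classical simulation hits a wall in the mid-40-qubit range while a QPU holds the state natively.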

Further gains, between 1.8x and 3x, were achieved by incorporating the latest AMD MI300X and MI355X GPUs and NVIDIA H100 and GB200 GPUs. The team, comprising researchers from IBM, Oak Ridge National Laboratory, and AMD, used the OpenMP API for shared-memory parallel programming to achieve these speeds. Beyond SQD, GPUs are proving valuable in error mitigation, a critical step in harnessing the potential of noisy quantum processors. Algorithmiq, Trinity College Dublin, and IBM researchers have developed techniques that employ tensors to model and remove noise from quantum circuit outputs.

Rather than relying on any single architecture, these hybrid approaches demonstrate how tightly integrated CPUs, GPUs, and QPUs together unlock performance and accuracy beyond what any one of them can achieve on their own.

Tensor-Based Error Mitigation Improves Quantum Results

Quantum computations, while promising unprecedented processing power, are inherently susceptible to errors. Researchers are now leveraging tensor-based methods, coupled with advanced GPUs, to dramatically improve the fidelity of results obtained from quantum processing units (QPUs). This isn’t brute-force error correction; it is intelligent mitigation of noise after the quantum calculation, using classical resources to refine the output. The new error-mitigation techniques run a circuit on a noisy quantum processor and then employ a tensor-based model to undo the effects of the noise.

A key advancement lies in algorithms that utilize both quantum circuits and tensor representations. Dual-unitary circuits are one example: “This class of circuits allows researchers to simulate systems that are chaotic but which also have exact verifiable solutions,” making them ideal for benchmarking. IBM expects a variety of tensor-based error mitigation techniques to aid users in running accurate quantum computations this year, further accelerated with the help of GPUs. The synergy between QPUs and tensor-based processing is further exemplified by recent work demonstrating a time crystal on an IBM quantum processor.

This complex system, spanning 144 qubits, was created in collaboration with Basque Quantum, NIST, and IBM researchers. Time crystals, oscillating systems resistant to external perturbation, are valuable for both materials science and quantum information research. The team didn’t just rely on the quantum result, however; they “tested the quantum results against best-available tensor network methods and used these methods to help improve the quantum execution.” This demonstrates how tensor networks can validate and refine quantum outputs, a process poised for acceleration with the incorporation of GPUs.

Beyond validation, tensor methods are also proving crucial in enhancing the efficiency of complex algorithms like sample-based quantum diagonalization (SQD). This collaborative effort highlights the potential for a truly quantum-centric supercomputing paradigm, where diverse computational resources work in concert to tackle previously intractable problems.

This work allows us to extract meaningful results for problems larger than those classical computing alone can verify, using quantum circuits to run the calculation and tensors to clean it up.

Quantum News

As the Official Quantum Dog (or hound), my role is to dig out the latest nuggets of quantum goodness. There is so much happening right now in technology, whether AI or the march of the robots, but quantum occupies a special space. Quite literally a special space: a Hilbert space, in fact! Here I try to provide some of the news that might be considered breaking in the quantum computing space.
