Sandia Lab Uses Machine Learning to Improve Quantum Computing

Daniel Hothem, Timothy Proctor, and their Sandia team have developed a machine-learning method to improve the efficiency of quantum computing systems. This approach focuses on modeling quantum computer behavior to better understand system capabilities and limitations, much like diagnosing issues within a complex system without direct access. The team aims to predict potential failures caused by physical errors—the predominant source of quantum computer malfunctions—and thereby accelerate development across the Department of Energy enterprise. By learning from program successes and failures, this method seeks to bridge the gap between theoretical potential and the performance of current quantum systems.

Understanding Quantum Computer Errors

Quantum computers are prone to errors that corrupt calculations, creating a significant gap between their theoretical potential and actual performance. These “physical errors” are a primary cause of failure, making it difficult and expensive to improve the systems. Unlike conventional computers, a quantum computer’s internal workings aren’t easily examined during a calculation, hindering the identification and correction of these errors. A predictive tool to identify likely errors is crucial for accelerating development.

The Sandia team is building a machine-learning model to predict these failures, likening the task to sizing up an old jukebox without opening it. By analyzing digital “snapshots” of quantum programs together with their success and failure rates, the model infers which physical errors are likely, avoiding the need to probe the complex quantum computer directly. Just as a listener might judge a jukebox by how well it plays a few records, the models learn to spot “scratches,” assess the machine’s internal condition, and predict how often a given program will succeed.
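To make the idea of a digital “snapshot” concrete, here is a minimal sketch of how a quantum program might be summarized as a fixed-size feature vector that a model could learn from. The circuit format, the chosen features, and the Bell-circuit example are illustrative assumptions, not Sandia’s actual representation.

```python
# Hypothetical sketch: turning a quantum program into a fixed-size "snapshot"
# of features a model could learn from. Illustrative only, not Sandia's
# actual featurization.

from collections import Counter

def snapshot_features(circuit):
    """Summarize a circuit (a list of (gate, qubits) tuples) as a feature vector."""
    gate_counts = Counter(gate for gate, _ in circuit)
    qubits_used = {q for _, qubits in circuit for q in qubits}
    two_qubit_ops = sum(1 for _, qubits in circuit if len(qubits) == 2)
    return [
        len(circuit),             # total number of operations (a rough proxy for depth)
        len(qubits_used),         # number of qubits the program touches
        two_qubit_ops,            # two-qubit gates, typically the noisiest operations
        gate_counts.get("h", 0),  # example single-gate counts
        gate_counts.get("cx", 0),
    ]

# Example: a 2-qubit Bell-state circuit described as (gate name, qubit indices).
bell = [("h", (0,)), ("cx", (0, 1))]
print(snapshot_features(bell))  # [2, 2, 1, 1, 1]
```

A fixed-length summary of this kind is one simple way to keep the model’s input the same size no matter how large the program is, which matters for the scalability discussed below.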

This modeling technique is designed to scale efficiently as quantum computers become more complex. Traditional error-analysis methods struggle with increasing complexity, but Sandia’s models don’t face the same limitations. This scalability will benefit programmers, engineers, and researchers, helping them improve devices, understand program limitations, and focus on fruitful research directions—ultimately reducing the cost and time of developing next-generation quantum systems.

Modeling Quantum Systems with Machine Learning

Sandia researchers are developing machine-learning models to improve the efficiency of quantum computing by predicting failures. These models function much like the diagnostic checks one might run on an old jukebox, identifying likely errors before a program is run. The team uses neural networks to analyze “digital snapshots” of quantum programs, predict which physical errors will occur during computation, and translate those predictions into a formula that estimates success rates. This approach aims to understand and mitigate errors without directly accessing the quantum computer’s internal workings.
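As a hedged illustration of that pipeline, the sketch below (using PyTorch, which the article does not specify) maps a snapshot feature vector to predicted rates for a few generic error types and combines them into a single success estimate. The layer sizes, error categories, and the simple “no error fires” product formula are assumptions made for the example, not the team’s model.

```python
# Toy sketch of "snapshot -> predicted error rates -> success estimate".
# Architecture and formula are illustrative assumptions.

import torch
import torch.nn as nn

class ErrorRatePredictor(nn.Module):
    """Maps a program snapshot to predicted rates for a few error types."""
    def __init__(self, n_features: int = 5, n_error_types: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32),
            nn.ReLU(),
            nn.Linear(32, n_error_types),
            nn.Sigmoid(),            # each output is an error probability in [0, 1]
        )

    def forward(self, snapshot: torch.Tensor) -> torch.Tensor:
        return self.net(snapshot)

def predicted_success(error_rates: torch.Tensor) -> torch.Tensor:
    """Toy formula: the program succeeds only if none of the predicted errors fire."""
    return torch.prod(1.0 - error_rates, dim=-1)

model = ErrorRatePredictor()
snapshot = torch.tensor([[2.0, 2.0, 1.0, 1.0, 1.0]])  # features from the sketch above
print(predicted_success(model(snapshot)))              # one predicted success probability
```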

The key innovation lies in the models’ ability to scale with increasing quantum computer complexity. Traditional error analysis methods become impractical as systems grow, but Sandia’s approach avoids this limitation. By focusing on the most important errors and learning from snapshots of successful and failed programs, the models remain manageable in complexity while maintaining accuracy. This scalability is crucial for future-proofing quantum computing development.

These models offer benefits across the quantum computing ecosystem. Programmers can use them to quickly identify and correct errors, while engineers can improve device design. Researchers in other fields, such as chemistry, can assess whether existing quantum systems are capable of solving specific problems. Ultimately, the goal is to streamline research and accelerate the application of quantum computers to national security challenges by avoiding “non-fruitful research directions.”

The Jukebox Analogy for Quantum Behavior

The Sandia team developed a method to improve quantum computing efficiency by drawing a parallel to troubleshooting an old jukebox. Just as one might test a few records to predict if the machine will play correctly without opening it up, the team builds models to predict quantum computer failures. These models learn from the successes and failures of quantum programs, identifying likely physical errors before a calculation runs—allowing researchers to avoid mistakes and accelerate development.

At the core of this approach, neural networks process “snapshots” of quantum programs and analyze the data to predict which physical errors will occur during a computation. By training the models on data from both successful and failed programs, the team avoids needing extensive access to the quantum computers themselves. The models infer information about potential errors, much like spotting “scratches” on a record, to predict program success rates.
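A minimal, self-contained training sketch is shown below, again assuming a PyTorch model and using synthetic data in place of real hardware runs; it only illustrates the general pattern of fitting a network to observed success rates, not the Sandia team’s method.

```python
# Hypothetical training sketch (synthetic data, not from any real device):
# a small network learns to map program snapshots to success rates.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(5, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Pretend we ran 64 programs on hardware and recorded each one's success rate.
snapshots = torch.rand(64, 5) * 10.0   # fake "snapshot" feature vectors
observed_success = torch.rand(64, 1)   # fake fraction of shots that succeeded

for step in range(500):
    optimizer.zero_grad()
    predicted = model(snapshots)                         # model's success estimate
    loss = nn.functional.mse_loss(predicted, observed_success)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.4f}")
```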

This method’s “hyper-efficiency” is key because it doesn’t become significantly more complex as quantum computers grow larger. Traditional error analysis methods struggle with scaling, but Sandia’s models maintain a manageable complexity. This allows programmers and engineers to quickly understand errors, improve devices, and determine which problems existing quantum systems can realistically solve, ultimately streamlining research and reducing costs.

Scaling Models for Future Quantum Computing

Sandia researchers are developing machine learning models to improve quantum computing by predicting errors. These models aim to understand how physical errors corrupt calculations in real-world quantum computers, which currently experience frequent failures. The team likens the process to diagnosing problems in an old jukebox – identifying issues like scratched records or faulty wiring – without being able to open the device to inspect it directly.

The team utilizes neural networks that process data from quantum programs, predicting which errors will occur during computation. By training these models on data from both successful and failed programs, researchers can infer likely errors from digital “snapshots,” similar to identifying imperfections on a record. This allows them to predict the success rate of a program without running it on a physical quantum computer, streamlining the research process.
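For a back-of-the-envelope feel for such a prediction, suppose a model estimated an effective error rate per circuit layer; the success probability then falls off roughly geometrically with depth. The 1% figure below is purely an illustrative assumption, not a measured value.

```python
# Illustrative arithmetic: predicting success without running the program,
# given a hypothetical per-layer error rate from a model.

predicted_error_per_layer = 0.01      # assumed model output, for illustration only
for depth in (10, 50, 200):
    success = (1.0 - predicted_error_per_layer) ** depth
    print(f"depth {depth:>3}: predicted success ~ {success:.2f}")

# depth  10: predicted success ~ 0.90
# depth  50: predicted success ~ 0.61
# depth 200: predicted success ~ 0.13
```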

A key benefit of this approach is its scalability. Unlike traditional error analysis methods, the Sandia models don’t become significantly more complex as quantum computers grow in size. This allows programmers and engineers to better understand and improve devices, and helps researchers determine which problems current quantum systems can realistically solve, ultimately accelerating development and reducing wasted research efforts.

“We’re building models that allow scientists to really understand a quantum computer so they can either make it better or understand what problems it can solve.”

Daniel Hothem
Quantum News

As the Official Quantum Dog (or hound), my role is to dig out the latest nuggets of quantum goodness. There is so much happening right now in technology, whether it is AI or the march of the robots, but Quantum occupies a special space. Quite literally a special space, a Hilbert space in fact, haha! Here I try to provide some of what might be considered breaking news in the Quantum Computing space.
