Princeton Team Develops Error Correction Method for Quantum Computers, Boosting Efficiency Tenfold

Researchers led by Jeff Thompson at Princeton University have developed a method to locate errors in quantum computers, making them easier to correct and accelerating progress toward large-scale quantum computing. Error correction techniques attract intense interest because they are essential to building practical quantum computers. The team, including Shruti Puri at Yale University and Guido Pupillo at the University of Strasbourg, demonstrated a way to identify errors in quantum computers more efficiently than before. The research, published in Nature, focuses on a quantum computer based on neutral atoms. The team found a way to characterize error rates without destroying the qubits, allowing them to detect errors in real time. This could significantly reduce the computational cost of implementing error correction.

“Not all errors are created equal,” said Jeff Thompson, associate professor of electrical and computer engineering at Princeton University.

Quantum Computing Error Detection Breakthrough

Researchers have developed a method that can identify the location of errors in quantum computers, making them up to ten times easier to correct. This development could significantly speed up progress towards large-scale quantum computers capable of solving complex computational problems. The team, led by Jeff Thompson from Princeton University, has demonstrated a way to identify when errors occur in quantum computers more efficiently than ever before. This is a new direction for research into quantum computing hardware, which more often seeks to lower the probability of an error occurring in the first place.

The Importance of Error Detection in Quantum Computing

Quantum computers are built from qubits, the core components of these machines. For nearly three decades, physicists have been inventing new qubits and improving them to be less fragile and less prone to error. However, some errors are inevitable no matter how good qubits get. The central obstacle to the future development of quantum computers is the ability to correct these errors. To correct an error, you first have to determine whether an error occurred and where it is in the data. Typically, the process of checking for errors introduces more errors, which then have to be found in turn, and so on.

A New Approach to Error Detection

Thompson’s lab works on a quantum computer based on neutral atoms. In this work, a team led by graduate student Shuo Ma used an array of 10 qubits to characterize the probability of errors occurring while first manipulating each qubit in isolation and then manipulating pairs of qubits together. They found error rates near the state of the art for a system of this kind: 0.1 percent per operation for single qubits and 2 percent per operation for pairs of qubits.
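To put those per-operation rates in perspective, the back-of-the-envelope sketch below (not from the paper; it assumes independent, uncorrelated errors and a hypothetical circuit size) shows how quickly such rates compound over even a modest circuit:

```python
# Back-of-the-envelope estimate (not from the paper): how per-operation
# error rates compound over a circuit, assuming independent errors.

P_SINGLE = 0.001  # ~0.1% error per single-qubit operation (reported rate)
P_PAIR = 0.02     # ~2% error per two-qubit operation (reported rate)

def circuit_success_probability(n_single_ops: int, n_pair_ops: int) -> float:
    """Probability that no operation in the circuit fails."""
    return (1 - P_SINGLE) ** n_single_ops * (1 - P_PAIR) ** n_pair_ops

# Example: a hypothetical circuit with 100 single-qubit and 20 two-qubit gates.
print(f"{circuit_success_probability(100, 20):.2f}")  # roughly 0.60
```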

Real-Time Error Detection

The main result of the study is not just the low error rates but a different way to characterize them without destroying the qubits. By storing the qubit in a different set of energy levels within the atom than in previous work, the researchers were able to monitor the qubits during the computation and detect errors as they occur. The measurement causes qubits with errors to emit a flash of light, while qubits without errors remain dark and unaffected.
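The toy Monte Carlo sketch below is purely illustrative (the error and detection probabilities are assumptions, not the experimental protocol); it shows why this heralding matters: flagged errors come with their locations attached, while unflagged errors stay hidden from the decoder.

```python
# Toy Monte Carlo (illustrative only, not the experimental protocol):
# heralded errors reveal their locations, so the decoder knows which
# qubits to repair; unheralded errors stay hidden.
import random

N_QUBITS = 10     # size of the array used in the experiment
P_ERROR = 0.02    # assumed per-qubit error probability (illustrative)
P_HERALD = 0.98   # assumed fraction of errors that produce a "flash"

def run_round() -> tuple[int, int, int]:
    errored = [random.random() < P_ERROR for _ in range(N_QUBITS)]
    flashed = [e and random.random() < P_HERALD for e in errored]
    hidden = sum(e and not f for e, f in zip(errored, flashed))
    return sum(errored), sum(flashed), hidden

errors, heralded, hidden = run_round()
print(f"errors={errors}, heralded={heralded}, hidden={hidden}")
```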

The Future of Quantum Computing

The researchers believe that, with the new approach, close to 98 percent of all errors should be detectable with optimized protocols. This could reduce the computational costs of implementing error correction by an order of magnitude or more. Other groups have already started to adapt this new error detection architecture. Researchers at Amazon Web Services and a separate group at Yale have independently shown how this new paradigm can also improve systems using superconducting qubits. This new approach to error detection can be used in many different qubits and computer architectures, making it a flexible solution that can be combined with other developments.
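A standard coding-theory relationship (general background, not a claim from the article) helps explain why errors at known locations are so much cheaper to handle: a code of a given distance can correct roughly twice as many erasures as errors at unknown locations.

```python
# Standard coding-theory background (not a result of this paper): a code of
# distance d corrects up to (d - 1) // 2 errors at unknown locations, but
# up to d - 1 erasures, i.e. errors whose locations are known.
for d in (3, 5, 7):
    unknown = (d - 1) // 2
    erasures = d - 1
    print(f"distance {d}: {unknown} unknown-location errors vs {erasures} erasures")
```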

“We need advances in many different areas to enable useful, large-scale quantum computing. One of the challenges of systems engineering is that these advances that you come up with don’t always add up constructively. They can pull you in different directions,” Thompson said. “What’s nice about erasure conversion is that it can be used in many different qubits and computer architectures, so it can be deployed flexibly in combination with other developments.”

Summary

Researchers have developed a method to identify and correct errors in quantum computers, potentially making errors up to ten times easier to correct and accelerating progress towards large-scale quantum computing. The new approach allows errors to be detected in real time and converted into a type of error known as an erasure error, which is simpler to correct than an error at an unknown location. This could significantly reduce the computational costs of implementing error correction.

  • A team of researchers led by Jeff Thompson from Princeton University has developed a method to identify errors in quantum computers, making them easier to correct and accelerating progress towards large-scale quantum computing.
  • The team’s approach, which is a new direction in quantum computing research, was published in Nature on Oct. 11. Collaborators include Shruti Puri at Yale University and Guido Pupillo at the University of Strasbourg.
  • The researchers’ work focuses on a type of quantum computer based on neutral atoms, specifically ytterbium atoms held in place by laser beams. They used an array of 10 qubits to characterize the probability of errors occurring.
  • The main result of the study is a different way to characterize errors without destroying the qubits. The researchers were able to monitor the qubits during computation to detect errors in real time. Errors cause the qubits to emit a flash of light, while qubits without errors remain dark.
  • This process converts the errors into a type of error known as an erasure error, which is simpler to correct than errors in unknown locations. This is the first time the erasure-error model has been applied to matter-based qubits.
  • The researchers believe that with this new approach, close to 98 percent of all errors should be detectable with optimized protocols, reducing the computational costs of implementing error correction significantly.
  • Other groups, including researchers at Amazon Web Services and Yale, have started to adapt this new error detection architecture.