A new set of tools for managing errors in near-term quantum computing has been developed through a co-design approach to error mitigation and detection. Rohan S. Kumar and colleagues at Yale University and the Yale Quantum Institute show that combining Quantum Error Detection (QED) and Probabilistic Error Cancellation (PEC) requires careful architectural design to achieve performance gains. The research reveals that optimising the frequency of QED cycles and implementing a novel characterisation protocol, termed steady-state extraction, is key to avoiding accuracy degradation and unlocking substantial improvements. Specifically, the co-designed system achieves between two and eleven times lower absolute error, and up to thirty-one times lower mean squared error, when running the Quantum Approximate Optimisation Algorithm (QAOA) on Iceberg codes compared to PEC alone on physical qubits, representing a strong step towards practical quantum computation.
Steady-state extraction unlocks performance gains from combined quantum error mitigation techniques
A thirty-one-fold reduction in mean squared error represents a substantial leap in the accuracy of near-term quantum computations, achieved by integrating Quantum Error Detection and Probabilistic Error Cancellation. Previously, combining these techniques resulted in diminished performance, but a new “steady-state extraction” protocol now isolates stable error behaviour, enabling strong gains. This co-designed architecture addresses the individual limitations of each technique, delivering a two- to eleven-fold decrease in absolute error compared with PEC alone on physical qubits and unlocking improvements previously unattainable. The significance of this work lies in its potential to accelerate the development of fault-tolerant quantum computers, even with the limitations of current hardware. Near-term quantum devices are inherently noisy, and effective error management is crucial for running algorithms of practical interest. QED and PEC, while individually promising, present complementary challenges; this research demonstrates a pathway to overcome these when used in conjunction.
Realising these benefits requires optimising the frequency of Quantum Error Detection cycles, with high-rate Iceberg codes proving particularly effective: they reached break-even overhead at just two logical layers per QED cycle. The surface code, a leading candidate for fault-tolerant quantum computation, typically demands significantly more physical qubits to encode a single logical qubit, increasing hardware requirements; Iceberg codes, a more recent development, offer a potentially more resource-efficient alternative in the near term. Simulations using depolarising noise with a two-qubit gate error rate of 10⁻³ revealed that the minimum QED interval did not yield advantages for any code tested, stressing the need for optimisation. Depolarising noise is a common model for quantum errors, representing the loss of quantum information through environmental interactions, and a gate error rate of 10⁻³ means that, on average, one in a thousand two-qubit gates introduces an error. This highlights the sensitivity of the combined QED-PEC system to the timing of error detection and the importance of careful calibration.
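To see why the most frequent QED interval need not be the best one, here is a deliberately simple toy model, not the paper's simulation. All parameters except the 10⁻³ gate error rate are assumptions: a per-layer logical error budget, a fixed noise cost per QED measurement cycle, and the assumption that single errors within a window are caught while pairs of errors (roughly `pending**2`) slip through.

```python
P_LAYER = 1e-2   # assumed error probability per logical layer
                 # (~10 two-qubit gates at the article's 1e-3 rate)
P_CYCLE = 5e-3   # assumed noise injected by each QED measurement cycle

def residual_error(n_layers: int, qed_interval: int) -> float:
    """Undetected-error estimate after n_layers, with a QED cycle every
    qed_interval layers: each cycle catches single errors but lets
    error pairs (~pending**2) through and adds its own noise."""
    pending = 0.0      # errors the next QED cycle could still catch
    undetected = 0.0   # errors past any detection
    for layer in range(1, n_layers + 1):
        pending += P_LAYER
        if layer % qed_interval == 0:
            undetected += pending**2 + P_CYCLE
            pending = 0.0
    return undetected + pending  # leftover errors go uncaught

# The optimum sits between the extremes: cycling every layer pays the
# measurement cost 20 times, while cycling rarely lets pairs accumulate.
best = min(range(1, 21), key=lambda k: residual_error(20, k))
```

In this toy model the minimum interval (a cycle every layer) is the worst option for a 20-layer circuit, mirroring the article's observation that the minimum QED interval gave no advantage; the real optimum depends on the code and hardware.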
Error-detected qubits exhibited a separation of timescales in their error evolution, with fast modes dominating the initial QED cycle and impacting the accuracy of noise modelling. This means that errors occur at different rates, and the fastest errors are most prominent immediately after error detection. This phenomenon necessitated the development of a “steady-state extraction” protocol to reduce estimation bias by up to 10.2 times. Estimation bias arises when the error model used to correct errors does not accurately reflect the true error behaviour of the quantum device. The steady-state extraction protocol works by allowing the system to settle into a stable error state before characterising the errors, thereby reducing the influence of transient errors and improving the accuracy of the error model. This protocol involves carefully measuring the error rates over extended periods, discarding initial data points where the error behaviour is still evolving. While successfully combining Quantum Error Detection and Probabilistic Error Cancellation offers a pathway to more reliable near-term quantum computations, this progress isn’t without its caveats. Initial attempts to integrate these techniques counterintuitively worsened accuracy, demanding a system-level redesign and highlighting a key tension between simply adding tools and truly co-designing an architecture.
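The discard-the-transient idea can be sketched in a few lines. The exponential decay model and every numeric value below are illustrative assumptions, not the paper's measured rates; the sketch only shows how dropping early cycles removes the bias contributed by the fast mode.

```python
import math

E_STEADY = 2e-3   # assumed steady-state error rate per QED cycle
A_FAST = 8e-3     # assumed amplitude of the fast initial mode
TAU = 1.5         # assumed decay constant of that mode, in cycles

def measured_rate(cycle: int) -> float:
    """Per-cycle error rate: a fast mode decaying onto a steady state."""
    return E_STEADY + A_FAST * math.exp(-cycle / TAU)

def steady_state_extract(rates, burn_in: int) -> float:
    """Discard the first burn_in cycles (transient), average the rest."""
    tail = rates[burn_in:]
    return sum(tail) / len(tail)

rates = [measured_rate(c) for c in range(20)]
naive = sum(rates) / len(rates)                    # biased by the fast mode
refined = steady_state_extract(rates, burn_in=8)   # close to E_STEADY
```

With these assumed numbers the naive average overestimates the steady-state rate by roughly 40%, while the burn-in estimate lands within a fraction of a percent of it.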
The initial reduction in accuracy from naive integration should not overshadow the benefits demonstrated once it was properly addressed. The failure stemmed from the fact that QED, while reducing the probability of undetected errors, introduces its own measurement-induced noise; PEC, attempting to correct errors using an error model skewed by that noise, actually amplified the overall error rate. The co-designed characterisation method, the ‘steady-state extraction’ protocol, addresses this by refining the error modelling process, reducing estimation bias by over tenfold and allowing PEC to function effectively alongside QED.
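A scalar caricature makes the amplification mechanism concrete. This is not the paper's PEC implementation: real PEC inverts a noise channel by quasi-probability sampling, but the one-number version below, where a channel shrinks an expectation value by a factor (1 − p) and mitigation divides it back out, shows the same failure mode when the model rate p is biased.

```python
def noisy_expectation(ideal: float, p_true: float) -> float:
    """Toy noise channel: shrinks the expectation value by (1 - p)."""
    return (1 - p_true) * ideal

def pec_mitigate(noisy: float, p_model: float) -> float:
    """Toy PEC: invert the assumed shrinkage."""
    return noisy / (1 - p_model)

ideal = 0.8
p_true = 0.02           # actual steady-state error rate (assumed)
p_model_biased = 0.08   # rate inferred from the fast initial mode (assumed)
p_model_ss = 0.02       # rate from steady-state extraction

noisy = noisy_expectation(ideal, p_true)
bad = pec_mitigate(noisy, p_model_biased)   # over-corrects past the ideal
good = pec_mitigate(noisy, p_model_ss)      # recovers the ideal value
```

With the biased model the “mitigated” result lands further from the ideal value than the unmitigated one, which is exactly the counterintuitive worsening the article describes; with the steady-state rate, mitigation recovers the ideal value.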
Steady-state extraction, achieved through a refined error modelling process, delivered substantial gains in error reduction and will begin to unlock more complex quantum algorithms. The ability to accurately model and mitigate errors is crucial for running increasingly complex quantum algorithms, such as those used in materials science, drug discovery, and financial modelling. The reduction in both absolute and mean squared error demonstrates the effectiveness of the co-designed approach. Absolute error measures the overall magnitude of the error, while mean squared error provides a measure of the average squared difference between the predicted and actual results, giving more weight to larger errors. Combining Quantum Error Detection and Probabilistic Error Cancellation represents a major refinement in managing errors within near-term quantum processors. Crucially, a co-designed architecture, rather than simple integration, is essential to unlock performance gains from these two techniques, with optimising the timing of error detection cycles being particularly important. A new ‘steady-state extraction’ protocol was developed to isolate stable error behaviour, addressing a previously unrecognised source of inaccuracy stemming from initial measurement disturbances. Future work will focus on extending this co-design approach to other error mitigation and detection techniques, and on exploring the scalability of the system to larger quantum processors.
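The two error metrics quoted throughout are standard definitions; the numbers below are made-up illustrative data, not results from the paper.

```python
# Hypothetical exact expectation values and mitigated estimates.
ideal = [0.50, 0.25, 0.75, 0.10]
measured = [0.48, 0.30, 0.70, 0.12]

# Mean absolute error: average magnitude of the deviations.
mae = sum(abs(m - i) for m, i in zip(measured, ideal)) / len(ideal)

# Mean squared error: average of squared deviations, so larger
# misses are penalised more heavily than small ones.
mse = sum((m - i) ** 2 for m, i in zip(measured, ideal)) / len(ideal)
```

Because MSE squares each deviation, a single large outlier dominates it, which is why a thirty-one-fold MSE reduction is an even stronger statement about worst-case behaviour than the two- to eleven-fold absolute-error reduction.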
The research demonstrated that combining Quantum Error Detection and Probabilistic Error Cancellation can improve the accuracy of quantum computations. Initially, simply integrating these techniques degraded performance, but a new protocol called ‘steady-state extraction’ resolved this by creating a more accurate error model. This allowed Probabilistic Error Cancellation to function effectively alongside Quantum Error Detection, reducing undetected errors in an Iceberg code running QAOA. The authors intend to extend this co-design approach to other error mitigation techniques and larger quantum processors.
👉 More information
🗞 Co-Designing Error Mitigation and Error Detection for Logical Qubits
🧠 ArXiv: https://arxiv.org/abs/2604.19871
