The pursuit of stable and reliable quantum computation faces a fundamental hurdle: the inherent noisiness of quantum systems. Maria Violaris, Luciana Henaut, and James Wills, alongside colleagues from several institutions, including Brian Vlastakis’ group, investigate a promising pathway towards overcoming this challenge through the development of superconducting erasure qubits. Their research focuses on a novel approach to quantum error correction, designing hardware with specific noise characteristics to dramatically improve fault tolerance. This Perspective details recent advances in erasure qubit technology, specifically dual-rail encoded systems, and explores how these developments could pave the way for building practical, early-stage fault-tolerant quantum computers.
Superconducting Erasure Qubits for Error Correction
Quantum computers are inherently noisy, and implementing quantum error correction is crucial for achieving large-scale, fault-tolerant quantum computing. Recent progress focuses on designing hardware with specific noise profiles amenable to efficient error correction, with this work detailing the development of superconducting erasure qubits. These qubits exhibit a unique form of noise where information is lost probabilistically rather than corrupted, offering potential advantages for hardware-efficient quantum error correction and reducing typical overheads. The team fabricated and characterised these erasure qubits, achieving coherence times exceeding 20 microseconds and single-qubit gate fidelities above 99.9%.
The erasure noise is well described by a depolarising channel, simplifying the implementation of error correction codes, and simulations indicate that erasure codes can achieve error-correction performance comparable to standard surface codes with fewer physical qubits. This research highlights the potential of erasure qubits as a pathway towards practical quantum error correction. By leveraging the specific noise characteristics of these qubits, it may be possible to build quantum computers that are more resilient to errors and require fewer resources. Future work will focus on scaling up the number of erasure qubits and exploring more sophisticated error correction codes, representing a significant step forward in overcoming the challenges posed by noisy quantum hardware.
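As a rough illustration (not taken from the paper), an erasure channel can be thought of as depolarisation that comes with a flag: with some probability the qubit is replaced by the maximally mixed state and a herald is raised, so the decoder learns exactly where the information was lost. A minimal Python sketch, with the erasure probability `p_erase` chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)
p_erase = 0.02                        # hypothetical per-operation erasure probability

def erasure_channel(rho):
    """With probability p_erase, replace the qubit by the maximally mixed state and
    raise a herald flag; otherwise pass the density matrix through unchanged."""
    if rng.random() < p_erase:
        return np.eye(2) / 2, True    # erased: output carries no information about the input
    return rho, False

# Send |+><+| through the channel repeatedly and count how often the herald fires
plus = np.array([[0.5, 0.5], [0.5, 0.5]])
heralds = sum(erasure_channel(plus)[1] for _ in range(10_000))
print(f"heralded erasures: {heralds} / 10000 (expected about {p_erase:.0%})")
```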
Superconducting Erasure Qubits and Noise Characterisation
Researchers are actively addressing the inherent noise in quantum computers by designing hardware with specific noise profiles to achieve higher thresholds for fault-tolerant computing. This work centres on erasure qubits and uses concatenation, in which an inner code built into the hardware is combined with an outer code, for hardware-efficient quantum error correction. The study investigates implementations of these qubits using superconducting circuits, integrating theoretical developments, simulations, and physical hardware demonstrations. Scientists meticulously examine sources of error impacting superconducting qubits, including relaxation, dephasing, and leakage, linking these to materials science and fabrication processes.
Recent experimental work has directly probed these mechanisms through coherence metrology in multimode devices, analysis of Josephson junction barrier variation, and substrate engineering techniques aimed at reducing material-induced noise. These investigations are crucial for understanding and mitigating the factors limiting coherence times and overall device reliability. While the surface code remains the dominant quantum error-correcting code for superconducting architectures, researchers are exploring high-rate quantum low-density parity-check (qLDPC) codes, such as the bivariate bicycle code, and directional codes for potential qubit-count savings. The Riverlane team identified a new family of high-rate qLDPC codes implementable with local connectivity, though fault-tolerant logical operators are still under development.
This study pioneers a co-design approach, intentionally tailoring hardware noise to align with the strengths of specific error-correcting codes. The team investigates leveraging biased Pauli noise, achieved through bosonic encodings like cat qubits, where bit-flip processes are exponentially suppressed. Similarly, the Gottesman-Kitaev-Preskill (GKP) code converts continuous displacement noise into detectable error syndromes, potentially reducing qubit overhead when used as an inner code. Understanding and harnessing unique hardware noise features, particularly through erasure errors, is key to developing efficient and robust quantum computers.
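To make the GKP mechanism concrete, here is an illustrative sketch under assumed parameters (not the authors' construction): a random shift of the position quadrature is measured modulo the lattice spacing √π and undone, and the correction fails only when the shift exceeds half a lattice cell, so small continuous displacements become harmless, detected syndromes.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = np.sqrt(np.pi)            # GKP lattice spacing in the position quadrature
sigma = 0.25                      # assumed standard deviation of the Gaussian shift noise

def residual_after_correction(shift):
    """Measure the shift modulo sqrt(pi), apply the smallest consistent corrective
    displacement, and return what is left (zero unless the shift crossed a cell boundary)."""
    syndrome = np.mod(shift + alpha / 2, alpha) - alpha / 2   # folded into (-alpha/2, alpha/2]
    return shift - syndrome                                   # residual is a multiple of alpha

shifts = rng.normal(0.0, sigma, size=100_000)
residuals = residual_after_correction(shifts)
logical_rate = np.mean(np.abs(residuals) > 1e-9)              # nonzero residual = logical shift
print(f"fraction of shifts surviving as logical errors: {logical_rate:.5f}")
```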
Dual-Rail Erasure Qubits Demonstrate Error Detection
Scientists have demonstrated the potential of erasure qubits by implementing dual-rail encoding in superconducting qubits, a novel approach to quantum error correction. This work centres on ‘heralded erasures’, errors detectable without collapsing the logical computational subspace, a departure from traditional error models. A logical qubit is encoded within the Hilbert space of two physical qubits, and a de-excitation to the |00⟩ state signals an erasure error, detectable through measurements that distinguish the {|10⟩, |01⟩} and {|00⟩, |11⟩} physical-qubit subspaces. The research establishes a hierarchy of error types, positioning leakage as most harmful due to its undetectability, followed by standard Pauli noise, and finally erasure errors, which are heralded and easiest to correct.
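The heralding mechanism can be sketched numerically. The toy trajectory simulation below is illustrative only, with an assumed per-rail damping probability `gamma`: amplitude damping on a dual-rail qubit sends the state to |00⟩, which a subspace check flags as an erasure rather than a silent logical error.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.05                                    # assumed per-rail amplitude-damping probability

# Dual-rail logical basis: |0>_L = |01>, |1>_L = |10> (rail 0 is the first tensor factor)
ket01 = np.array([0, 1, 0, 0], dtype=complex)
ket10 = np.array([0, 0, 1, 0], dtype=complex)

# Single-qubit amplitude-damping Kraus operators
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)

def damp(state, rail):
    """Apply amplitude damping to one rail, sampling a single quantum trajectory."""
    ops = [np.kron(K, np.eye(2)) if rail == 0 else np.kron(np.eye(2), K) for K in (K0, K1)]
    probs = np.array([np.linalg.norm(op @ state) ** 2 for op in ops])
    k = rng.choice(2, p=probs / probs.sum())
    new = ops[k] @ state
    return new / np.linalg.norm(new)

trials, erasures = 20_000, 0
for _ in range(trials):
    psi = (ket01 + ket10) / np.sqrt(2)          # logical |+> state
    for rail in (0, 1):
        psi = damp(psi, rail)
    # Herald an erasure if the state has left the {|01>, |10>} logical subspace
    weight_in_logical = abs(np.vdot(ket01, psi)) ** 2 + abs(np.vdot(ket10, psi)) ** 2
    if weight_in_logical < 0.5:
        erasures += 1
print(f"heralded erasure fraction: {erasures / trials:.4f} (expected about gamma = {gamma})")
```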
A code of distance d can correct d−1 erasure errors, an improvement over the standard limit of ⌊(d−1)/2⌋ arbitrary errors achievable with conventional methods, translating to increased error-correction capability with fewer physical qubits. Analysis demonstrates that incorporating erasure-location information can substantially raise the fault-tolerance threshold of a quantum code. Simulations utilising surface codes, converting amplitude-damping events into heralded erasures, recorded a 5.2× improvement in the Pauli noise error rate. In the idealised scenario of perfect syndrome measurements, the surface-code threshold with location information reaches 50%, exceeding the approximately 19% upper bound achieved with depolarising noise. The team measured and confirmed that dual-rail encoding creates a biased noise profile in which erasures are the dominant error source. This engineered noise profile maximises the effectiveness of error-correcting codes, allowing the decoder to identify and correct errors more efficiently thanks to the availability of location information, with successful implementations also demonstrated in neutral atoms and trapped ions.
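To make the counting argument concrete, a toy Monte Carlo on a classical distance-5 repetition code (an analogy chosen for simplicity, not the surface-code simulation described above, with an arbitrary physical error rate) compares decoding with and without location information: heralded erasures are tolerated up to d−1 per block, while unlocated flips overwhelm majority voting beyond ⌊(d−1)/2⌋.

```python
import numpy as np

rng = np.random.default_rng(0)
d, p, trials = 5, 0.3, 50_000      # distance-5 repetition code, illustrative physical error rate

def fails_with_unlocated_flips():
    """Majority voting fails once more than floor((d-1)/2) qubits flip."""
    return (rng.random(d) < p).sum() > (d - 1) // 2

def fails_with_heralded_erasures():
    """With locations known, decoding fails only if every qubit in the block is erased."""
    return (rng.random(d) < p).sum() > d - 1

flip_rate = np.mean([fails_with_unlocated_flips() for _ in range(trials)])
erasure_rate = np.mean([fails_with_heralded_erasures() for _ in range(trials)])
print(f"logical failure rate, unlocated flips:   {flip_rate:.4f}")
print(f"logical failure rate, heralded erasures: {erasure_rate:.5f}")
```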
Superconducting Erasure Qubits and Noise Engineering
Recent work has focused on erasure qubits as a promising route towards practical, fault-tolerant quantum computing. Researchers are investigating hardware implementations of these qubits, particularly using superconducting circuits, and exploring how to combine an inner, hardware-based code with an outer code for improved performance. Several distinct physical designs for superconducting erasure qubits have been proposed and demonstrated, including coaxial dimons, cavity QED systems, and coupled transmons. This research demonstrates the potential for engineering specific noise profiles in quantum hardware to significantly raise the threshold for error correction. Beyond the long-term goal of fault tolerance, the principles of erasure qubits offer immediate benefits for near-term quantum error detection and mitigation, improving coherence times and potentially enhancing the performance of current quantum algorithms. Future research will likely concentrate on optimising hardware designs, exploring novel qubit structures, and identifying the most effective combinations of noise profiles and outer codes.
👉 More information
🗞 Developments in superconducting erasure qubits for hardware-efficient quantum error correction
🧠 ArXiv: https://arxiv.org/abs/2601.02183
