Researchers are tackling the critical challenge of real-time decoding for quantum error correction, a necessity for building practical, fault-tolerant quantum computers. Samuel Stein, Shuwen Kan, and Chenxu Liu of Pacific Northwest National Laboratory and Fordham University, together with colleagues, present a novel neural decoder framework that significantly reduces logical error rates and decoding latency. Their work exploits the differing timescales of hardware calibration and error correction, decoupling device statistics from rapid syndrome decoding via a technique called feature-wise linear modulation (FiLM). This innovative approach, tested on IBM quantum processors with repetition codes up to distance d = 11 and over 2.7 million experimental shots, demonstrates an impressive 11.1x reduction in logical error rate compared to existing methods and, crucially, generalises to new hardware and calibration data without retraining, paving the way for adaptive and efficient quantum computation.
Scientists are investigating real-time decoding of quantum error correction (QEC) as essential for enabling fault-tolerant quantum computation. A practical decoder must operate with high accuracy at low latency, whilst remaining robust to spatial and temporal variations in hardware noise. Researchers introduce a hardware-conditioned neural decoder framework designed to exploit the natural separation of timescales in superconducting processors. Calibration drifts occur over hours, while error correction requires microsecond-scale responses. By processing calibration data through a graph-based encoder, the framework learns to predict and correct for these drifts.
FiLM Decoding Generalises Across IBM Quantum Processors
Scientists decoupled the processing of device statistics from low-latency syndrome decoding via Feature-wise Linear Modulation (FiLM) conditioning of a lightweight convolutional backbone. They evaluated this approach using the 1D repetition code on IBM Fez, Kingston, and Pittsburgh processors, collecting over 2.7 million experimental shots spanning code distances up to d = 11. A single trained model generalized to unseen qubit chains and new calibration data acquired days later without retraining. On these unseen experiments, the FiLM-conditioned decoder achieved up to an 11.1× reduction in logical error rate, for example a 0.748% logical error rate (LER) against 3.597% for correlated MWPM at d = 5.
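To make clear how such a reduction factor is read off, the ratio of the two d = 5 error rates quoted above can be checked directly. The snippet below is a back-of-the-envelope illustration, not taken from the paper's code; the 11.1× headline figure is the paper's maximum, reached at d = 11.

```python
# Back-of-the-envelope check: the reduction factor is the ratio of logical
# error rates (LER = logical failures / shots). Figures are the d = 5 values
# quoted above; the 11.1x maximum is reached at d = 11.
ler_film = 0.748 / 100   # FiLM-conditioned decoder, d = 5
ler_mwpm = 3.597 / 100   # correlated MWPM baseline, d = 5
print(f"reduction factor at d = 5: {ler_mwpm / ler_film:.1f}x")  # ~4.8x
```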
Graph-neural decoders trained on detector graphs have surpassed matching under circuit-level noise in simulation and reached parity with MWPM on experimental repetition-code data. These results are compelling, though many learned decoders are trained for a single fixed device or noise distribution and require retraining or fine-tuning to track drift while maintaining low latency. Here, a graph-based encoder ingests per-qubit calibration statistics (e.g., T1/T2, readout assignment errors, gate error rates) and produces a latent conditioning vector for the decoder. Experiments revealed that by decoupling the processing of slowly varying hardware characteristics from the fast-paced syndrome decoding, the team could achieve substantial improvements in logical error rates. They measured performance using the 1D repetition code on IBM Fez, Kingston, and Pittsburgh processors, collecting over 2.7 million experimental shots spanning code distances up to d = 11.
Results demonstrate that a single trained model generalizes effectively to unseen qubit chains and calibration data acquired days later without requiring retraining, a remarkable feat of adaptability. Specifically, the FiLM-conditioned decoder achieved up to an 11.1x reduction in logical error rate compared to modified minimum-weight perfect matching (MWPM), showcasing a substantial leap in decoding efficiency. This improvement was observed on previously unseen experimental data, confirming the model’s robust generalization capabilities. Data shows the decoder leverages Feature-wise Linear Modulation (FiLM) to apply channel-wise transformations to intermediate features based on calibration data.
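To make the channel-wise modulation concrete, the following PyTorch sketch shows one plausible FiLM layer; the class name, shapes, and dimensions are illustrative assumptions rather than the authors' implementation.

```python
# Minimal FiLM layer sketch (PyTorch). Shapes and names are assumptions
# for illustration; the paper's actual architecture may differ.
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Maps a calibration-derived conditioning vector to per-channel
    scale (gamma) and shift (beta), then applies them to intermediate
    convolutional features: out = gamma * features + beta."""
    def __init__(self, cond_dim: int, num_channels: int):
        super().__init__()
        self.to_gamma_beta = nn.Linear(cond_dim, 2 * num_channels)

    def forward(self, features: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # features: (batch, channels, length) syndrome feature map
        # cond:     (batch, cond_dim) latent calibration embedding
        gamma, beta = self.to_gamma_beta(cond).chunk(2, dim=-1)
        return gamma.unsqueeze(-1) * features + beta.unsqueeze(-1)

# Example: modulate a (batch=4, channels=32, length=11) feature map
film = FiLM(cond_dim=16, num_channels=32)
out = film(torch.randn(4, 32, 11), torch.randn(4, 16))
```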
By employing a graph-neural network to encode per-qubit calibration features, including T1/T2 coherence times, readout assignment errors, and gate error rates, the system generates a latent conditioning vector. This vector modulates a lightweight convolutional backbone, allowing the decoder to adapt to spatial and temporal noise variations without increasing latency. The work utilized 400 calibration snapshots, and the model maintained high performance even when evaluated on new contiguous sets of physical qubits with calibration data acquired a week after training. Measurements confirm a performance crossover at (d, r) ≈ (7, 5) for the Z-basis, beyond which the FiLM-conditioned decoder consistently outperforms baseline methods.
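Putting the pieces together, a minimal sketch of the conditioning pipeline might look like the following; a plain per-qubit MLP with mean pooling stands in here for the paper's graph-based encoder, and all layer sizes are assumed for illustration.

```python
# Hedged sketch of the overall conditioning pipeline (PyTorch), reusing the
# FiLM class and imports from the previous snippet. A per-qubit MLP with
# mean pooling stands in for the paper's graph-based calibration encoder.
class ConditionedDecoder(nn.Module):
    def __init__(self, calib_feats: int = 4, cond_dim: int = 16, channels: int = 32):
        super().__init__()
        # Encodes per-qubit calibration features (e.g. T1/T2, readout
        # assignment error, gate error) into one latent vector per snapshot.
        self.calib_encoder = nn.Sequential(
            nn.Linear(calib_feats, cond_dim), nn.ReLU(),
            nn.Linear(cond_dim, cond_dim))
        self.conv = nn.Conv1d(1, channels, kernel_size=3, padding=1)
        self.film = FiLM(cond_dim, channels)  # FiLM layer defined above
        self.head = nn.Linear(channels, 1)    # predicts the logical flip

    def forward(self, syndromes: torch.Tensor, calib: torch.Tensor) -> torch.Tensor:
        # syndromes: (batch, 1, n_detectors); calib: (batch, n_qubits, calib_feats)
        cond = self.calib_encoder(calib).mean(dim=1)         # pool over qubits
        h = torch.relu(self.film(self.conv(syndromes), cond))
        return torch.sigmoid(self.head(h.mean(dim=-1)))      # P(logical error)

decoder = ConditionedDecoder()
p_flip = decoder(torch.randint(0, 2, (4, 1, 10)).float(), torch.randn(4, 5, 4))
```

In a layout like this, the calibration encoder only needs to run when new calibration data arrive, so the per-shot decoding path reduces to the convolutional backbone plus a channel-wise affine, which is consistent with the article's claim of adaptation without added latency.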
At the largest code size tested (d = 11, r = 11), the decoder delivered an 11.1x improvement in logical error rate relative to hardware-informed MWPM. Furthermore, the team recorded a 7.41x reduction in logical error rate compared with both FiLM-free networks and traditional MWPM decoders, highlighting the effectiveness of the proposed architecture. This breakthrough delivers promising, adaptive performance with negligible latency overhead, paving the way for scalable QEC systems.
Neural Decoding Boosts Quantum Error Correction
Scientists have developed a hardware-conditioned neural decoder framework to improve fault tolerance in superconducting quantum processors. This new approach exploits the differing timescales of calibration drifts and error correction, enabling high-accuracy, low-latency decoding despite variations in hardware noise. By employing a graph-based encoder and feature-wise linear modulation (FiLM) to condition a convolutional neural network, the researchers effectively separate the processing of device statistics from the critical, time-sensitive syndrome decoding process. Evaluations using the 1D repetition code on IBM quantum processors, Fez, Kingston, and Pittsburgh, involved over 2.7 million experimental shots, demonstrating significant improvements in logical error rates.
Specifically, the FiLM-conditioned decoder achieved up to an 11.1x reduction in logical error rate compared to modified minimum-weight perfect matching, particularly for larger distances where error correlations are more prominent. Importantly, a single trained model successfully generalised to unseen qubit chains and calibration data collected days later, indicating the decoder's ability to learn transferable representations of hardware noise. This work represents a promising step towards adaptive, real-time quantum error correction with minimal latency overhead.
👉 More information
🗞 Calibration-Conditioned FiLM Decoders for Low-Latency Decoding of Quantum Error Correction Evaluated on IBM Repetition-Code Experiments
🧠 ArXiv: https://arxiv.org/abs/2601.16123
