Scientists are increasingly focused on mitigating parameter drift in superconducting qubits, which occurs on timescales faster than conventional calibration methods can address. Malthe A. Marciniak, Rune T. Birke, and Johann B. Severin, working with colleagues from the Center for Quantum Devices and the NNF Quantum Computing Programme at the Niels Bohr Institute, University of Copenhagen, Denmark, present an on-FPGA workflow capable of millisecond-scale calibration and benchmarking. The work is significant because it demonstrates a fully integrated system for pulse generation, data acquisition, analysis, and feed-forward, eliminating the delays associated with CPU-based processing. Their sparse-sampling and on-FPGA inference tools enable rapid readout calibration, spectroscopy, pulse-amplitude optimisation, coherence estimation, and benchmarking, supporting over 74,000 consecutive recalibrations and sustained gate fidelity through continuous, closed-loop optimisation.
Scientists have developed a calibration and benchmarking workflow for superconducting qubits that operates on millisecond timescales, addressing a critical limitation imposed by rapidly fluctuating qubit parameters: because these parameters drift on sub-second timescales, calibration must execute fast enough to track them.
Deploying a closed-loop recalibration protocol continuously for six hours enabled over 74,000 consecutive recalibrations, consistently yielding better gate performance than the initial calibration settings. Measurements reveal rapid fluctuations in qubit coherence, with T1 values varying over a two-second window, while Clifford randomized gate benchmarking completes in 107 milliseconds. The research also profiles the timing budget of each calibration primitive and quantifies the trade-off between time-to-decision and estimator precision under sparse sampling, identifying optimal parameter regimes for efficient estimation of both T1 and pulse parameters.
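The precision-versus-speed trade-off under sparse sampling can be illustrated with a minimal numerical sketch. Everything here is illustrative rather than taken from the paper: the "true" T1, shot counts, and delay grids are assumed values, and the log-linear fit stands in for whatever estimator the authors run on the FPGA. The point it demonstrates is generic: fewer delay points means a faster decision but a noisier T1 estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

T1_TRUE = 50.0   # assumed "true" relaxation time, microseconds
SHOTS = 200      # repetitions per delay point
TRIALS = 300     # repeated experiments per sampling density

def estimate_t1(n_points):
    """Estimate T1 from a sparsely sampled decay via a log-linear fit."""
    delays = np.linspace(5.0, 100.0, n_points)
    p = np.exp(-delays / T1_TRUE)            # ideal survival probability
    counts = rng.binomial(SHOTS, p)          # projective (shot) noise
    freq = np.clip(counts, 1, None) / SHOTS  # clip to avoid log(0)
    slope, _ = np.polyfit(delays, np.log(freq), 1)
    return -1.0 / slope

sparse_estimates = [estimate_t1(4) for _ in range(TRIALS)]
dense_estimates = [estimate_t1(16) for _ in range(TRIALS)]
spread_sparse = float(np.std(sparse_estimates))  # 4-point decisions: fast, noisy
spread_dense = float(np.std(dense_estimates))    # 16-point decisions: slow, precise
```

Running this shows the estimator spread shrinking as the sampling density grows, which is the quantity one trades against time-to-decision when choosing an operating regime.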
Designing On-Chip Low-Latency Calibration Systems
Central to this approach was an on-FPGA workflow, meticulously designed to co-locate pulse generation, data acquisition, analysis, and feed-forward control, thereby eliminating the delays inherent in conventional CPU-based round trips. This contrasts with traditional methods where data is transferred to a central computer for analysis, introducing significant latency.
The resulting low-latency primitives were then deployed for tasks including readout calibration, spectroscopy, pulse-amplitude calibration, coherence estimation, and comprehensive benchmarking. Correlation analysis confirmed that this recalibration effectively suppressed the coupling of gate error to control-parameter drift, while simultaneously preserving performance linked to qubit coherence. For years, researchers have battled parameter drift, the subtle but persistent shifts in calibration that degrade performance over time.
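The closed-loop measure-estimate-feed-forward pattern can be sketched in a few lines of Python. This is a toy model, not the authors' implementation: the quadratic error model, the random-walk drift, and the probe noise level are all assumptions, and the finite-difference gradient stands in for whatever pulse-amplitude calibration primitive runs on the FPGA. It shows why continuous recalibration suppresses the coupling of gate error to control-parameter drift while a static calibration degrades.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_error(amp, opt_amp):
    # Toy model: gate error grows quadratically with amplitude miscalibration.
    return (amp - opt_amp) ** 2

def probe(amp, opt_amp):
    # A measured error signal, corrupted by shot noise (assumed level).
    return true_error(amp, opt_amp) + rng.normal(0, 1e-4)

opt_amp = 1.00     # hidden "true" optimal pulse amplitude, which drifts
amp = 1.00         # closed-loop controller's current setting
static_amp = 1.00  # baseline: calibrate once, never update
errors_tracked, errors_static = [], []

for _ in range(5000):
    opt_amp += rng.normal(0, 1e-3)  # slow random-walk parameter drift
    # One recalibration cycle: estimate the error gradient from two probe
    # measurements (finite difference), then feed the correction forward.
    delta = 1e-2
    grad = (probe(amp + delta, opt_amp) - probe(amp - delta, opt_amp)) / (2 * delta)
    amp -= 0.5 * grad
    errors_tracked.append(true_error(amp, opt_amp))
    errors_static.append(true_error(static_amp, opt_amp))

mean_tracked = float(np.mean(errors_tracked))
mean_static = float(np.mean(errors_static))
```

In this sketch the tracked error stays pinned near the noise floor, while the statically calibrated error grows with the accumulated drift, mirroring the correlation analysis reported above.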
Shifting Paradigms of Quantum Drift Management
This work doesn’t simply offer another incremental improvement in coherence or gate fidelity; it presents a fundamentally different approach to managing this drift, shifting the burden from slow, CPU-bound post-processing to rapid, on-chip recalibration. This is a significant leap forward, enabling continuous recalibration (over 74,000 cycles in a six-hour period) and demonstrably improved gate performance.
Future Work: Towards Truly Resilient Quantum Hardware
However, the reliance on specific signal models, such as exponential and sine-like functions, represents a limitation. Real quantum systems are rarely so neatly described. Ultimately, the goal is not just to correct for drift, but to build systems resilient enough to withstand it.
🗞 Millisecond-Scale Calibration and Benchmarking of Superconducting Qubits
🧠 ArXiv: https://arxiv.org/abs/2602.11912
The underlying physical parameters governing superconducting qubits—specifically the Josephson junction critical current and the coupling element inductances—are highly sensitive to environmental fluctuations, including temperature gradients and stray electromagnetic fields. This sensitivity results in the drift of the qubit Hamiltonian parameters ($\Delta$, $\epsilon$, $g$), which dictates the qubit’s energy levels and coupling strengths. Traditional methods assume an adiabatic evolution of the parameters, which is fundamentally unrealistic in current operational regimes. Addressing this necessitates dynamic monitoring and continuous recalibration of the entire control sequence, treating parameter estimation not as a static measurement, but as a continuous stochastic process requiring real-time filtering and prediction.
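Treating parameter estimation as a continuous stochastic process invites standard real-time filtering machinery. A scalar Kalman filter tracking a random-walk frequency drift is one textbook way to do this; the sketch below is a generic illustration with assumed noise levels, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random-walk model for a drifting qubit frequency (MHz detuning, assumed).
q = 1e-4  # process noise variance per step (drift strength)
r = 1e-2  # measurement noise variance (spectroscopy shot noise)

true = 0.0
x, p = 0.0, 1.0  # filter's state estimate and its variance
err_filt, err_raw = [], []

for _ in range(2000):
    true += rng.normal(0, np.sqrt(q))     # the parameter drifts
    z = true + rng.normal(0, np.sqrt(r))  # one noisy measurement arrives
    # Kalman predict: a pure random walk leaves the mean unchanged and
    # inflates the variance by the process noise.
    p += q
    # Kalman update: blend prediction and measurement by the gain k.
    k = p / (p + r)
    x += k * (z - x)
    p *= 1 - k
    err_filt.append((x - true) ** 2)
    err_raw.append((z - true) ** 2)

rmse_filt = float(np.sqrt(np.mean(err_filt)))  # filtered tracking error
rmse_raw = float(np.sqrt(np.mean(err_raw)))    # error of raw measurements
```

The filtered estimate tracks the drifting parameter with substantially lower error than any single raw measurement, which is exactly the benefit of framing calibration as filtering rather than as a static fit.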
The architectural success of the on-FPGA approach stems from its ability to execute computationally intensive estimation routines in the hardware domain, bypassing the inherent I/O bottlenecks of general-purpose CPUs. Specifically, tasks such as fitting measured spectral data to characterize the qubit’s energy spectrum or solving the optimization landscape for pulse amplitude need tremendous computational throughput. By mapping algorithms like least-squares fitting or gradient descent onto dedicated FPGA resources, the system achieves massive parallelism, ensuring that the time required for parameter estimation is limited only by the measurement rate itself, rather than the processing time.
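Fitting sine-like calibration data of the kind mentioned above can often be reduced to a small fixed-size linear solve, the sort of dense linear algebra that maps well onto dedicated hardware. As a hedged sketch (illustrative frequencies, amplitudes, and noise levels; `np.linalg.lstsq` stands in for whatever solver the FPGA pipeline implements): with the drive frequency known, a Rabi-style signal is linear in its cosine, sine, and offset coefficients, so least squares recovers the oscillation contrast in closed form.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated Rabi-style signal: P(t) = a*cos(w t) + b*sin(w t) + c with a
# known drive frequency w. Linearity in (a, b, c) turns the fit into one
# small least-squares solve rather than an iterative optimisation.
w = 2 * np.pi * 5.0          # assumed Rabi frequency, MHz
t = np.linspace(0.0, 1.0, 64)  # 64 sample times, microseconds
a_true, b_true, c_true = 0.45, 0.10, 0.50
y = a_true * np.cos(w * t) + b_true * np.sin(w * t) + c_true
y += rng.normal(0, 0.02, t.size)  # shot noise (assumed level)

# Fixed-size design matrix and closed-form least-squares solve.
A = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
a_hat, b_hat, c_hat = coef
contrast = float(np.hypot(a_hat, b_hat))  # fitted oscillation amplitude
```

Because the matrix dimensions are fixed at design time, such a solve can be fully pipelined, so estimation time is set by the measurement rate rather than by processing, as the paragraph above describes.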
From an engineering perspective, the deployment of such continuous recalibration is an essential precursor to scalable quantum computing architectures, particularly those aiming for fault tolerance. Quantum Error Correction (QEC) codes mandate maintaining high-fidelity gates across thousands of interconnected qubits. If systematic errors—caused by accumulated parameter drift—exceed the threshold defined by the QEC code, the logical qubit cannot be protected. Therefore, the reliable, millisecond-scale calibration demonstrated here moves the technology closer to the operational requirements needed to run complex, fault-tolerant circuits necessary for algorithms such as Shor’s or implementing universal quantum computation.
