Rydberg microwave sensors offer exceptional sensitivity but are often hampered by noise, limiting their practical applications. Zongkai Liu, Qiming Ren, and Wenguang Yang, all from the State Key Laboratory of Quantum Optics Technologies and Devices at Shanxi University, together with Yanjie Tong and colleagues, have developed a novel deep learning framework to address this challenge. Their research demonstrates a self-supervised method for denoising signals from these sensors, achieving noise suppression comparable to traditional multi-measurement averaging, but in a single measurement. This breakthrough eliminates the need for pristine reference signals, a significant advantage in real-world scenarios, and substantially reduces computational demands: denoising performance equivalent to averaging 10,000 datasets is achieved with a speed-up of three orders of magnitude. The team's validation across various noise types, together with an architectural analysis, provides crucial insights for optimizing deep learning techniques within Rydberg sensor systems.
The proposed framework addresses a significant challenge in quantum sensing by removing the requirement for clean reference signals, which are often impractical to obtain.
Training is conducted using two sets of noisy signals possessing identical statistical distributions, allowing the network to learn inherent noise characteristics. When evaluated on Rydberg sensing datasets, the framework outperforms existing methods in both noise reduction and signal fidelity. Because each noisy measurement serves as the label for another, the approach bypasses the pristine reference signals that limit many sensitive detection systems.
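A minimal PyTorch sketch of this noisy-pair training scheme is given below; the toy convolutional `Denoiser`, tensor shapes, and hyperparameters are illustrative assumptions, not the authors' architecture.

```python
# Sketch of self-supervised training on pairs of noisy signals (PyTorch).
# The Denoiser, layer sizes, and hyperparameters are illustrative
# assumptions, not the architecture reported in the paper.
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Toy 1D convolutional denoiser standing in for the real network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=9, padding=4),
        )
    def forward(self, x):
        return self.net(x)

def train_step(model, optimizer, x_noisy, y_noisy):
    """One step: the label is itself a noisy measurement, never a clean signal."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x_noisy), y_noisy)
    loss.backward()
    optimizer.step()
    return loss.item()

model = Denoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# x_batch, y_batch: two independent noisy acquisitions of the same signal,
# shape (batch, 1, 1000) to match the 1000-point traces described later.
x_batch = torch.randn(8, 1, 1000)
y_batch = torch.randn(8, 1, 1000)
print(train_step(model, opt, x_batch, y_batch))
```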
To validate this framework, the scientists constructed a Rydberg atom microwave detection system, employing a room-temperature glass cell measuring 5 × 5 × 5 cm³ to confine cesium atoms. Atoms were excited from the ground state |6S1/2, F = 4⟩ to the Rydberg state |60S1/2⟩ via an intermediate |6P3/2, F = 5⟩ state using an 852 nm probe beam (Rabi frequency Ωp/2π = 9.06 MHz) and a 510 nm coupling beam (Rabi frequency Ωc/2π = 0.83 MHz). A balanced differential detector, with a bandwidth of 1 MHz and a common-mode rejection ratio of 40 dB, processed the resulting spectral signals, while frequency stabilization was achieved using an ultra-stable cavity with a finesse of 2 × 10⁵.
Radio-frequency electric field detection harnessed an atomic heterodyne scheme, generating a 63 MHz signal under test and a 63.05 MHz local oscillator, applied via parallel brass electrodes (10 cm × 8 cm, 5 cm spacing). The resulting ac Stark shift, proportional to the signal's amplitude, was deduced from the probe transmission. The team employed a spectrum analyzer configured with a 10 Hz resolution bandwidth, five averaging cycles, and a 49–51 kHz sweep range with 5 Hz steps, centered on the 50 kHz intermediate frequency, to analyze the signals.
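For intuition, the short NumPy sketch below synthesizes a pair of such IF traces: a 50 kHz beat note (the 63.05 MHz local oscillator mixed against the 63 MHz signal) with two independent noise realizations. The sampling rate, amplitude, and noise level are illustrative assumptions.

```python
# Sketch: synthesize a pair of noisy 50 kHz IF traces for noisy-pair
# training. Sampling rate, amplitude, and noise level are assumptions.
import numpy as np

fs = 1.0e6            # 1 MHz sampling, matching the detector bandwidth
n_points = 1000       # trace length used in the paper's datasets
t = np.arange(n_points) / fs
f_if = 50.0e3         # IF = 63.05 MHz - 63 MHz

clean = np.sin(2 * np.pi * f_if * t)                   # idealized beat note
rng = np.random.default_rng(0)
x_noisy = clean + 0.5 * rng.standard_normal(n_points)  # measurement 1
y_noisy = clean + 0.5 * rng.standard_normal(n_points)  # measurement 2: same
# underlying signal, statistically identical but independent noise,
# which is exactly the property a valid training pair needs
```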
Rydberg Sensor Noise Suppression via Deep Learning
Scientists have developed a self-supervised deep learning framework for Rydberg sensors that achieves single-shot noise suppression equivalent to the accuracy obtained through averaging 10,000 measurements. This breakthrough eliminates the requirement for clean reference signals, a significant challenge in many sensing applications. Experiments revealed that the framework surpasses the performance of traditional denoising techniques like wavelet transform and Kalman filtering.
The system utilizes an 852 nm probe beam with a Rabi frequency of 9.06 MHz and a 510 nm coupling beam at 0.83 MHz to excite atoms to the 60S1/2 Rydberg state, enabling the detection of RF electric fields through the ac Stark shift. The balanced differential detector has a bandwidth of 1 MHz and a common-mode rejection ratio of 40 dB, ensuring high-fidelity signal acquisition. The research team constructed a Rydberg atom microwave detection system, confining cesium atoms within a 5 × 5 × 5 cm³ glass cell, to acquire the intermediate frequency (IF) signals needed to validate the deep learning methods.
The training process focused on minimizing the discrepancy between the network's output and experimental labels, formalized as a constrained minimization problem. The dataset comprised pairs of measurements, x_train = x + n1 and y_train = y + n2, where x and y represent the underlying clean signals and n1 and n2 are independent noise realizations. In contrast with traditional autoencoders, this self-supervised approach eliminates the need for clean reference signals by using another noisy measurement as the label.
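Why a noisy label suffices follows the standard Noise2Noise argument; the derivation below is a sketch under the assumption, consistent with the pair definitions above, that the label noise n2 is zero-mean and independent of the network input.

```latex
% Noise2Noise objective: the label y + n_2 is itself noisy.
\theta^{*} = \arg\min_{\theta}\,
  \mathbb{E}\big\lVert f_{\theta}(x + n_{1}) - (y + n_{2}) \big\rVert^{2}
% Expanding the square and using that n_2 is zero-mean and independent of
% f_theta(x + n_1), the cross term vanishes:
  = \arg\min_{\theta}\Big(
  \mathbb{E}\big\lVert f_{\theta}(x + n_{1}) - y \big\rVert^{2}
  + \mathbb{E}\lVert n_{2} \rVert^{2}\Big),
% so the constant noise-power term does not affect the optimum: training
% against a noisy label drives f_theta toward the clean signal.
```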
The training process involved optimizing weight parameters to minimize a loss function, utilizing 4000 sets of noisy spectral data, each containing 1000 consecutive points, partitioned into training, labeling, and testing sets at a 4:4:2 ratio. The researchers applied varying levels of attenuation to the IF signal, simulating challenging electromagnetic environments, and demonstrated the ability to recover signals submerged in noise using the deep learning framework. Measurements confirm that the optimized weights obtained after training allow unseen test inputs to be denoised accurately, closely matching the results obtained from traditional 10,000-set averaging.
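A small sketch of that 4:4:2 bookkeeping is shown below, assuming the traces are already arranged so that each input has a paired, independently noisy acquisition of the same signal as its label; the placeholder arrays only illustrate the partition.

```python
# Sketch of the 4:4:2 partition described above. The real dataset pairs
# each input trace with a second, independently noisy acquisition of the
# same signal; random placeholders stand in for measured traces here.
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal((4000, 1000))   # 4000 traces, 1000 points each

train_inputs = data[:1600]       # 40%: network inputs
train_labels = data[1600:3200]   # 40%: noisy labels paired with the inputs
test_set     = data[3200:]       # 20%: held-out traces for evaluation
assert len(train_inputs) == len(train_labels) == 1600 and len(test_set) == 800
```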
Rydberg Noise Suppression via Self-Supervised Learning
This work demonstrates a novel self-supervised deep learning framework designed to suppress noise in Rydberg sensor data, achieving performance equivalent to averaging 10,000 measurements in a single processing step. The framework operates without requiring clean reference signals, instead relying on training with pairs of noisy signals sharing identical statistical distributions. Quantitative validation, utilising time-domain signals from Rydberg atom microwave heterodyne detection, confirms the superiority of this approach over established methods like wavelet transforms and Kalman filtering, as evidenced by significantly lower mean squared error values.
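As a concrete illustration of such an MSE comparison, the sketch below scores a classical soft-threshold wavelet baseline (via PyWavelets) on a synthetic noisy trace; the wavelet family, decomposition level, and threshold are assumptions rather than the paper's settings, and a Kalman filter baseline would plug into `mse()` in the same way.

```python
# Sketch: mean-squared-error comparison against a soft-threshold wavelet
# baseline. Wavelet family, level, and threshold are assumptions.
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4, thresh=0.3):
    """Soft-threshold the detail coefficients, then reconstruct."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def mse(a, b):
    return float(np.mean((a - b) ** 2))

rng = np.random.default_rng(1)
t = np.arange(1000) / 1.0e6
clean = np.sin(2 * np.pi * 50.0e3 * t)      # 50 kHz IF stand-in
noisy = clean + 0.5 * rng.standard_normal(1000)
print("MSE raw:     ", mse(noisy, clean))
print("MSE wavelet: ", mse(wavelet_denoise(noisy), clean))
```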
The research further extends to an examination of architectural complexity, comparing U-Net and Transformer models. Results indicate that the Transformer model, despite its larger size and longer training time, delivers improved denoising performance in both time and frequency domains. This work quantifies the trade-off between complexity and performance for both U-Net and Transformer architectures, providing valuable guidance for optimizing deep learning-based denoising in future Rydberg sensor systems and opening possibilities for more sensitive and efficient sensing technologies.
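The complexity side of that trade-off can be made concrete by counting trainable parameters, as in the sketch below; both models are toy stand-ins rather than the paper's exact U-Net and Transformer.

```python
# Sketch: compare model sizes for a small encoder-decoder CNN (U-Net-like)
# versus a small Transformer encoder. Both are toy stand-ins; the paper's
# exact architectures are not reproduced here.
import torch.nn as nn

def n_params(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

unet_like = nn.Sequential(                 # downsample twice, upsample twice
    nn.Conv1d(1, 32, 9, stride=2, padding=4), nn.ReLU(),
    nn.Conv1d(32, 64, 9, stride=2, padding=4), nn.ReLU(),
    nn.ConvTranspose1d(64, 32, 9, stride=2, padding=4, output_padding=1),
    nn.ReLU(),
    nn.ConvTranspose1d(32, 1, 9, stride=2, padding=4, output_padding=1),
)
transformer_like = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, dim_feedforward=256,
                               batch_first=True),
    num_layers=4,
)
print(f"U-Net-like:       {n_params(unet_like):,} parameters")
print(f"Transformer-like: {n_params(transformer_like):,} parameters")
```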
The authors acknowledge limitations relating to the specific noise profiles investigated and the computational demands of the Transformer architecture. Future work could explore strategies to mitigate these demands and broaden the scope of noise types addressed, potentially leading to even more efficient and robust signal processing for Rydberg sensor systems.
👉 More information
🗞 Self-Supervised Learning with Noisy Dataset for Rydberg Microwave Sensors Denoising
🧠 ArXiv: https://arxiv.org/abs/2601.01924
