Researchers at CNRS, Thales, and Université Paris-Saclay have demonstrated a new approach to quantum reservoir computing using a simple system: a single transmon coupled to a readout resonator. This proof-of-concept realization performs classification by encoding input data in the amplitude of a coherent drive and measuring the resulting cavity state in the Fock basis, a method that departs from typical quantum computing techniques. The experiment successfully performed two classical classification tasks with fewer measured features than conventional classical neural networks require, suggesting a potential hardware advantage for quantum machine learning. The researchers note that while various implementations of quantum reservoir computing have been explored in simulations, experimental implementations have been scarce until now.
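The paper's exact encoding is not reproduced here, but the idea of amplitude encoding followed by Fock-basis readout can be sketched in Python. The drive-to-amplitude `gain` and the number of features `n_max` below are illustrative assumptions, not values from the experiment:

```python
import math

def fock_features(x, n_max=4, gain=1.0):
    """Illustrative feature map: encode a scalar input x as the amplitude
    alpha of a coherent cavity state, then return the photon-number
    probabilities P(n) = exp(-|alpha|^2) |alpha|^(2n) / n! as features.
    `gain` (the drive-to-amplitude conversion) is a made-up parameter."""
    alpha = gain * x
    nbar = abs(alpha) ** 2  # mean photon number of the coherent state
    return [math.exp(-nbar) * nbar**n / math.factorial(n)
            for n in range(n_max)]

features = fock_features(0.8)
# Each feature is a distinct nonlinear (Poissonian) function of the input
# amplitude, so a single mode yields several nonlinear features at once.
```

This is the hardware-efficiency point: one measurement basis on one physical mode already supplies a whole family of nonlinear features of the input.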
Circuit-Quantum-Electrodynamics System for Reservoir Computing
This proof-of-concept realization opens new avenues for hardware-efficient quantum neural networks. Danijela Marković, the contact author, explained that their design obtains a large number of nonlinear features from a single physical system, emphasizing its efficiency. Numerical simulations supported these findings, showing that increasing the Kerr nonlinearity of the system improved the reservoir’s performance and suggesting a pathway for optimizing future designs. To promote reproducibility and further investigation within the quantum machine-learning community, the researchers made their data and code openly available on Zenodo at https://zenodo.org/records/15745370. Marković stated that their work demonstrates a hardware-efficient quantum neural-network implementation that can be scaled up and generalized to other quantum machine-learning models.
Fock Basis Measurement Enables Nonlinear Feature Extraction
The team implemented a proof-of-concept quantum reservoir using only a single transmon, a type of superconducting qubit, coupled to a readout resonator, challenging the previous belief that more intricate architectures were essential for this type of computation. This achievement relies on a novel approach to data input and feature extraction. Further analysis, supported by numerical simulations, revealed that increasing the Kerr nonlinearity, a property of the superconducting circuit, benefited the reservoir’s performance. The experimental results, published by the American Physical Society (Phys. Rev. Applied 25, 054005; published 4 May 2026), are openly available via Zenodo at https://zenodo.org/records/15745370 and demonstrate a scalable implementation that could be generalized to other quantum machine-learning models, potentially leading to more compact and powerful quantum neural networks.
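The reported benefit of Kerr nonlinearity can be illustrated with a toy numerical model (not the authors' simulation; the Hilbert-space truncation, drive strength, and Kerr constant below are arbitrary). It evolves a driven cavity from vacuum with and without a Kerr term and compares the resulting Fock-basis populations:

```python
import numpy as np
from scipy.linalg import expm

def fock_populations(kerr, drive=1.0, t=1.0, dim=12):
    """Toy driven Kerr cavity: H = (K/2) n(n-1) + eps (a + a^dagger),
    evolved from vacuum; returns Fock-basis populations |<n|psi(t)>|^2.
    All parameter values are illustrative."""
    a = np.diag(np.sqrt(np.arange(1, dim)), 1)  # annihilation operator
    n = a.conj().T @ a                          # number operator
    H = 0.5 * kerr * n @ (n - np.eye(dim)) + drive * (a + a.conj().T)
    psi0 = np.zeros(dim, dtype=complex)
    psi0[0] = 1.0                               # start in vacuum
    psi_t = expm(-1j * H * t) @ psi0            # unitary evolution
    return np.abs(psi_t) ** 2

linear = fock_populations(kerr=0.0)   # harmonic cavity: coherent state
kerr_on = fock_populations(kerr=2.0)  # Kerr redistributes Fock populations
```

The point of the sketch is that the Kerr term makes the measured photon-number distribution a more strongly nonlinear function of the drive, consistent with the simulations' finding that larger Kerr improves the reservoir.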
Kerr Nonlinearity Improves Reservoir Performance
Researchers at Laboratoire Albert Fert, CNRS, demonstrated efficient quantum reservoir computing using simple hardware, challenging the idea that a large number of qubits is necessary to generate sufficient nonlinear features for effective computation. This allowed them to perform two classical classification tasks while utilizing fewer measured features than classical neural networks commonly require. Further investigation revealed that enhancing the Kerr nonlinearity, an effective photon–photon interaction in the superconducting circuit, positively impacted the reservoir’s performance; numerical simulations corroborated the experimental findings, indicating that increased Kerr nonlinearity leads to improved classification accuracy.
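As a toy illustration of the classification step (not the paper's tasks or data; the encoding, feature count, and synthetic inputs below are invented), a linear readout trained on coherent-state Fock-basis features can separate two input classes:

```python
import math
import numpy as np

def fock_features(alpha, n_max=4):
    # Photon-number probabilities of a coherent state with real amplitude alpha
    nbar = alpha ** 2
    return [math.exp(-nbar) * nbar**k / math.factorial(k)
            for k in range(n_max)]

rng = np.random.default_rng(0)
# Two synthetic classes, encoded as drive amplitudes around 0.5 and 2.0
amps = np.concatenate([rng.normal(0.5, 0.1, 50), rng.normal(2.0, 0.1, 50)])
labels = np.array([0] * 50 + [1] * 50)

X = np.array([fock_features(a) for a in amps])   # "reservoir" features
X = np.hstack([X, np.ones((len(X), 1))])         # bias column
w, *_ = np.linalg.lstsq(X, labels, rcond=None)   # linear readout (least squares)
preds = (X @ w > 0.5).astype(int)
accuracy = (preds == labels).mean()
```

Only the linear readout is trained; all the nonlinearity comes from the physical feature map itself, which is the premise of reservoir computing.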
