Quantum Computing Yields Comparable Accuracy Across Six Models

Researchers are exploring the potential of quantum computing to address challenges in analysing complex medical datasets. Luke Antoncich, Yuben Moodley and Ugo Varetto, working with colleagues from the Centre for Quantum Information, Simulation and Algorithms and the Centre for Respiratory Health, both at The University of Western Australia, as well as the Pawsey Supercomputing Research Centre and QuEra Computing Inc, demonstrate a quantum reservoir computing approach applied to a small, complex medical dataset. This collaborative effort, involving Jonathan Wurtz, Jing Chen and Pascal Jahan Elahi, alongside Casey R. Myers, represents a significant step towards utilising near-term quantum devices for biomarker-based clinical outcome prediction. Their findings reveal that quantum features, particularly when generated through hardware execution on the Aquila processor, can offer improved accuracy and stability compared to classical machine learning, potentially due to a regularising effect arising from the inherent noise and structure of the quantum system.

Predicting patient outcomes from complex medical data remains a major challenge for modern healthcare. Now, a novel application of quantum computing techniques offers a potential route to improved diagnostics and personalised treatment. Biomarker-based predictions are often hampered by nonlinear relationships between indicators, correlations among features, and the limited size of many patient datasets. This work introduces an approach using quantum reservoir computing (QRC), a technique that harnesses the principles of quantum mechanics to process information, and evaluates its performance on a real-world medical dataset.

The study investigates both simulated and actual hardware implementations of QRC, utilising the neutral-atom Rydberg processor named Aquila. Initial findings reveal that models trained on quantum features generated through simulation achieve comparable test accuracies to those trained on conventional, classical features, yet exhibit increased variability across different data partitions, suggesting a tendency towards overfitting.

Significantly, when the QRC is executed on the Aquila hardware, the resulting models exhibit enhanced robustness and, in many cases, statistically significant improvements in test accuracy compared to their simulated counterparts. This suggests that the physical process of quantum computation introduces a form of regularisation, preventing the model from memorising the training data and improving its ability to generalise to unseen cases.

The research team delved into the underlying mechanisms driving this improvement, examining the statistical properties of the quantum features generated by both the hardware and the simulation. They discovered that hardware execution applies a distinct, time-dependent transformation to the feature distributions, compressing them towards the mean and reducing the mutual information between the hardware-generated and simulated features.

This structured transformation appears to reshape the quantum representation of the data, effectively acting as a regulariser and enhancing the model’s predictive power. This study demonstrates the potential of QRC, particularly when implemented on actual quantum hardware, to overcome limitations of classical machine learning in biomarker-based prediction.

The observed regularising effect offers a promising pathway towards building more reliable and accurate diagnostic tools, especially in scenarios where data is scarce or complex. Future work will focus on refining the quantum feature generation process and exploring the application of this technique to a wider range of medical datasets and clinical challenges.

Quantum hardware regularisation improves machine learning performance with biomarker data

Initial analysis of biomarker data reveals that machine-learning models trained on quantum reservoir computing (QRC) features achieve mean test accuracies comparable to those trained on classical features, yet exhibit notably higher training accuracies and increased variability across different data splits. This suggests a potential for overfitting when utilising emulated quantum features.

However, a crucial distinction emerges when comparing hardware execution of QRC to its noiseless emulation: models trained on hardware-generated features demonstrate increased robustness across data splits and, in many configurations, statistically significant improvements in mean test accuracy. This combination of enhanced accuracy and stability points towards a regularising effect induced by the hardware execution process itself.

Further investigation into the origin of this behaviour focuses on the statistical properties of the quantum features generated by both hardware and emulation. Hardware execution applies a structured, time-dependent transformation to these features, characterised by a compression toward the mean and a progressive reduction in mutual information relative to the emulated counterparts.

Specifically, the hardware implementation consistently compresses the distribution of quantum features, narrowing the range of values observed. This compression is accompanied by a measurable decrease in mutual information between the hardware-generated and emulated features, suggesting that hardware-specific effects systematically filter or simplify the information encoded in the quantum feature representation.
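To make these two diagnostics concrete, the sketch below shows one way such compression and mutual-information reduction could be quantified with scikit-learn. The feature matrices are hypothetical stand-ins, not the paper's data, and the "hardware" array is a toy model of the reported effect rather than real device output.

```python
# Minimal sketch: quantifying compression and mutual information between
# hardware-generated and emulated QRC features. All arrays below are
# hypothetical stand-ins for the paper's feature matrices.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)

# Hypothetical feature matrices: rows = samples, columns = QRC observables
# measured at successive time steps.
emulated = rng.normal(0.0, 1.0, size=(200, 16))
# Toy model of hardware effects: compression toward the mean plus noise.
hardware = 0.6 * emulated + rng.normal(0.0, 0.3, size=emulated.shape)

# Compression toward the mean: hardware features span a narrower range.
print("emulated std per feature:", emulated.std(axis=0).mean())
print("hardware std per feature:", hardware.std(axis=0).mean())

# Mutual information between each hardware feature and its emulated
# counterpart; a drop here indicates systematic information filtering.
mi = [
    mutual_info_regression(hardware[:, [j]], emulated[:, j], random_state=0)[0]
    for j in range(emulated.shape[1])
]
print("mean MI (hardware vs emulated):", np.mean(mi))
```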

This reshaping appears to act as a form of regularisation, preventing the model from memorising noise or spurious correlations present in the training data. The study employed six classical machine-learning models: XGBoost, Random Forest, Gradient Boosted Trees, Logistic Regression, Support Vector Machine, and k-Nearest Neighbours, all implemented using established Python packages such as scikit-learn and xgboost. Hyperparameter optimisation was conducted within a nested cross-validation framework using stratified K-fold cross-validation (K = 10) to ensure unbiased estimates of generalisation performance.
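A minimal sketch of this nested cross-validation set-up, assuming scikit-learn: the inner loop tunes hyperparameters and the outer loop estimates generalisation. The classifier, parameter grid, and synthetic data here are illustrative and do not reproduce the paper's exact search spaces.

```python
# Minimal sketch of nested cross-validation with stratified 10-fold splits.
# The model and grid are illustrative, not the paper's configuration.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=120, n_features=12, random_state=0)

inner = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)  # tuning
outer = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)  # evaluation

# Inner loop: hyperparameter search; outer loop: unbiased accuracy estimate.
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=inner,
)
scores = cross_val_score(search, X, y, cv=outer)
print(f"nested CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```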

Quantum reservoir computing with Rydberg atoms for biomarker analysis

A neutral-atom Rydberg processor, specifically the Aquila device, served as the core platform for implementing quantum reservoir computing (QRC) in this study. QRC leverages the inherent dynamics of a quantum system to map classical data into a quantum feature space, bypassing the need for complex quantum circuit optimisation. Classical input data, representing biomarker information, were initially encoded into the quantum system’s state, preparing it for time evolution governed by a defined Hamiltonian.

The expectation values of strategically chosen observables were then measured at multiple discrete time steps, constructing a quantum feature vector for each input. To establish a comparative baseline, six classical machine-learning models were trained and evaluated on the complete biomarker dataset. Subsequently, SHAP (SHapley Additive exPlanations) values were calculated to identify and rank the most informative subsets of biomarkers, reducing dimensionality and focusing on key predictive features.
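As a rough illustration of the SHAP-based ranking step, the following sketch uses the shap and xgboost packages on synthetic data. The model choice, the data, and the top-k cutoff are all assumptions for illustration, not the authors' configuration.

```python
# Minimal sketch of SHAP-based biomarker ranking, assuming a tree model
# and synthetic placeholder data (not the study's biomarker dataset).
import numpy as np
import shap
import xgboost
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=150, n_features=10, random_state=0)
model = xgboost.XGBClassifier(n_estimators=50).fit(X, y)

explainer = shap.TreeExplainer(model)
# For a binary XGBoost classifier this returns one (samples, features) array.
shap_values = explainer.shap_values(X)

# Rank features by mean absolute SHAP value and keep a top subset
# (the cutoff of 5 is an arbitrary illustrative choice).
importance = np.abs(shap_values).mean(axis=0)
top_k = np.argsort(importance)[::-1][:5]
print("top biomarker indices:", top_k)
```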

These selected biomarker subsets were then used as inputs for both emulated and hardware-executed QRC, allowing for a direct comparison of performance. Emulation of the quantum reservoir was performed using classical computation to simulate the time evolution of the quantum system, providing a noiseless control for assessing the impact of hardware-induced effects.
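The sketch below emulates a toy quantum reservoir along these lines, assuming a small Ising-type Hamiltonian as a simplified stand-in for Aquila's Rydberg dynamics. The encoding of inputs as local detunings, the choice of observables, and the readout times are all illustrative assumptions, not the paper's protocol.

```python
# Minimal sketch of a classically emulated quantum reservoir: encode a
# classical input into a small spin Hamiltonian, evolve, and read out
# expectation values at several times to build a feature vector.
import numpy as np
from scipy.linalg import expm

I = np.eye(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def op(single, site, n):
    """Embed a single-qubit operator at `site` in an n-qubit register."""
    out = single if site == 0 else I
    for k in range(1, n):
        out = np.kron(out, single if k == site else I)
    return out

def qrc_features(x, n=4, times=(0.5, 1.0, 1.5)):
    """Map a classical input x (length n) to QRC features: encode x as
    local detunings, evolve under H, read out <Z_i> at each time."""
    H = sum(op(X, i, n) for i in range(n))                # global drive
    H = H + sum(x[i] * op(Z, i, n) for i in range(n))     # data encoding
    H = H + sum(0.5 * op(Z, i, n) @ op(Z, i + 1, n)       # nearest-neighbour
                for i in range(n - 1))                    # coupling
    psi0 = np.zeros(2 ** n)
    psi0[0] = 1.0                                         # all qubits in |0>
    feats = []
    for t in times:
        psi = expm(-1j * H * t) @ psi0
        feats += [np.real(psi.conj() @ op(Z, i, n) @ psi) for i in range(n)]
    return np.array(feats)

print(qrc_features(np.array([0.2, -0.1, 0.4, 0.3])))
```

Hardware execution replaces the `expm` step with the physical dynamics of the device, which is exactly where the noise-induced regularisation discussed above enters.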

Hardware execution involved directly running the QRC protocol on the Aquila processor, exposing the quantum features to inherent device noise and limitations. This approach enabled investigation into whether the imperfections of real quantum hardware could act as a regularising force, potentially mitigating the overfitting observed with emulated data and improving generalisation performance. The choice of QRC was motivated by its potential to capture complex, nonlinear relationships within the biomarker data, a challenge for traditional machine-learning techniques.

Quantum noise enhances medical diagnosis model performance

Scientists have long sought robust machine learning models for medical diagnosis, but biological data presents unique challenges. The inherent complexity of living systems generates datasets riddled with noise, subtle correlations, and limited sample sizes, frequently frustrating attempts to build reliable predictive tools. This research offers a compelling, if cautious, step forward by exploring the potential of quantum reservoir computing (QRC) to address these difficulties.

The team’s work isn’t about achieving a dramatic leap in accuracy on benchmark datasets, but about demonstrating a surprising benefit from the imperfections of real quantum hardware. The finding that models trained on features generated by an actual quantum processor, despite its inherent noise, exhibit greater stability and, crucially, improved test accuracy compared to their simulated counterparts is particularly noteworthy.

This suggests that the noise isn’t simply a hindrance, but a form of regularisation, preventing the model from overfitting to the training data. It’s a counterintuitive result, hinting that embracing the messy reality of physical systems can sometimes yield more robust outcomes than striving for perfect, but ultimately brittle, simulations. However, interpreting these gains requires careful consideration.

The researchers acknowledge the potential for information leakage when using a fixed feature selection across data splits, limiting the generalisability of their findings. Moreover, the specific biomarker dataset used, while valuable, represents only one application. The next crucial step will be to test this approach on a wider range of medical datasets and explore whether the observed regularisation effect holds true across different biological contexts. If confirmed, this could pave the way for a new paradigm in machine learning, one where the quirks of quantum hardware are harnessed to build more reliable and clinically useful predictive models.

👉 More information
🗞 Quantum Reservoir Computing with Neutral Atoms on a Small, Complex, Medical Dataset
🧠 arXiv: https://arxiv.org/abs/2602.14641

Rohail T.

I am a quantum scientist exploring the frontiers of physics and technology. My work focuses on uncovering how quantum mechanics, computing, and emerging technologies are transforming our understanding of reality. I share research-driven insights that make complex ideas in quantum science clear, engaging, and relevant to the modern world.
