Quantum Machine Learning Gains Robustness with Shallower Circuits

Aakash Ravindra Shinde and colleagues at the University of Helsinki reveal a need for improved evaluation of Variational Quantum Algorithms (VQAs), as current practice relies heavily on classical simulation rather than testing on actual Noisy Intermediate-Scale Quantum (NISQ) devices. Their work addresses a key gap in understanding what constitutes a ‘shallow’ quantum circuit and how to optimise circuit depth for Variational Quantum Classifiers (VQCs) operating on noisy platforms. The team proposes a new metric based on relative entropy and demonstrates a correlation between this metric, transpiled circuit depth, and the performance differences observed when comparing simulations to results obtained on real quantum hardware. This research provides empirical evidence across various VQC techniques, datasets, and quantum devices, offering valuable insight into building more reliable and reproducible quantum machine learning algorithms.

Relative entropy and circuit depth predict noisy quantum classifier performance

For the first time, a correlation has been established that allows Variational Quantum Classifier (VQC) performance to be predicted. Previously, simulations offered no reliable indication of results on noisy quantum hardware. The new metric, based on relative entropy and circuit depth, demonstrates that a VQC’s performance on a simulator correlates with its performance on a noisy quantum device, enabling classical pre-evaluation. In particular, this indicates that circuit depth alone is insufficient to define a ‘shallow’ circuit, highlighting the importance of data separability as measured by relative entropy, a quantity capturing how distinguishable different data categories are for the algorithm. Relative entropy, mathematically defined as the divergence between the probability distributions of different classes, measures how easily the VQC can differentiate between them; higher relative entropy suggests clearer separation and potentially better performance.
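The divergence described above can be illustrated with the standard Kullback–Leibler form of relative entropy between two discrete class distributions. This is a minimal sketch, not the paper’s exact averaged metric; the example distributions are hypothetical.

```python
import numpy as np

def relative_entropy(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two discrete distributions."""
    p = np.clip(np.asarray(p, dtype=float), eps, None)  # clip to avoid log(0)
    q = np.clip(np.asarray(q, dtype=float), eps, None)
    return float(np.sum(p * np.log(p / q)))

# Two well-separated class distributions over measurement outcomes...
well_separated = relative_entropy([0.9, 0.1], [0.1, 0.9])
# ...versus two nearly overlapping ones.
overlapping = relative_entropy([0.55, 0.45], [0.45, 0.55])

print(well_separated > overlapping)  # → True
```

Higher divergence between the per-class outcome distributions corresponds to the "clearer separation" the metric rewards.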

This advancement addresses a key gap in understanding VQC reproducibility and offers a pathway to optimise models before committing expensive, time-consuming quantum resources. The experiments covered diverse ansatzes (the parameterised circuit structures underlying each model) and model implementations, alongside datasets ranging in complexity and size, ensuring broad applicability of the findings. These ansatzes included both hardware-efficient and problem-specific designs, allowing a comprehensive assessment of their impact on performance predictability. Datasets varied from simple binary classification problems to more complex multi-class scenarios, with sizes ranging from a few hundred to several thousand data points. The predictive power of the metric held true across multiple noisy quantum devices from different providers, each with its own transpilation methods, qubit-mapping strategies, and noise levels. The devices included machines from IBM Quantum, Rigetti, and IonQ, each exhibiting distinct qubit connectivity, gate fidelity, and coherence times.

The metric’s predictive power remained consistent regardless of the specific gate sets employed or the architectural configuration of the quantum hardware used. While the metric successfully predicts performance differences, it currently offers no insight into the absolute performance level achievable on noisy hardware, leaving a gap between prediction and practical, reliable quantum classification. Future work will focus on calibrating the metric to estimate expected accuracy, potentially through comparisons with benchmark datasets and error models. This calibration would involve training the metric on a range of VQCs with known performance on various quantum devices, allowing a predictive model to be built that estimates absolute accuracy from relative entropy and circuit depth. Furthermore, exploring how different noise models affect the metric’s accuracy will be crucial for improving its robustness and generalisability.
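The calibration idea sketched above — learning a map from (relative entropy, transpiled depth) to expected accuracy using benchmark runs — could in its simplest form be a least-squares fit. The records and the linear form below are entirely hypothetical, chosen only to show the shape of such a calibration, not results from the paper.

```python
import numpy as np

# Hypothetical benchmark records: (average relative entropy, transpiled depth, observed accuracy).
records = np.array([
    [1.8,  40, 0.91],
    [1.5,  60, 0.86],
    [1.1,  85, 0.78],
    [0.9, 120, 0.70],
    [0.6, 150, 0.61],
])
X = np.column_stack([records[:, 0], records[:, 1], np.ones(len(records))])
y = records[:, 2]

# Least-squares fit of: accuracy ≈ a * entropy + b * depth + c
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict_accuracy(entropy, depth):
    """Estimate absolute accuracy for an unseen (entropy, depth) pair."""
    return float(coeffs @ [entropy, depth, 1.0])

print(round(predict_accuracy(1.3, 70), 2))
```

A real calibration would likely need device-specific terms and a noise model, but the workflow — fit once on benchmarked circuits, then predict classically — is the point of the sketch.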

Data properties and circuit depth determine variational quantum classifier performance

Predicting how well a quantum algorithm will perform on real hardware remains a persistent hurdle in quantum machine learning. While this work offers a valuable metric linking data characteristics and circuit complexity to observed performance, it also highlights a broader tension within the community. Increasing focus is being placed on suppressing errors through sophisticated techniques such as error mitigation and pulse-level control. However, these methods often demand significant computational overhead and expertise, potentially negating the benefits of using a quantum computer in the first place. Error mitigation techniques such as zero-noise extrapolation and probabilistic error cancellation aim to reduce the impact of noise on quantum computations, but they require careful calibration and can significantly increase computational cost.
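Zero-noise extrapolation, mentioned above, illustrates the overhead involved: the same expectation value is measured at several artificially amplified noise levels, then extrapolated back to zero noise. A minimal linear (Richardson-style) version is sketched below; the scale factors and measured values are illustrative, not from the paper.

```python
import numpy as np

# Noise amplification factors (1.0 = the circuit's native noise level) and
# the expectation values measured at each one (illustrative numbers).
scale_factors = np.array([1.0, 2.0, 3.0])
measured = np.array([0.80, 0.65, 0.50])

# Fit E(s) = a*s + b and evaluate at s = 0 to estimate the noiseless value.
a, b = np.polyfit(scale_factors, measured, deg=1)
zero_noise_estimate = b

print(round(zero_noise_estimate, 2))  # → 0.95
```

Note the cost implied even by this toy: every extrapolation point is a full set of circuit executions, which is exactly the overhead the passage describes.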

Current efforts to build ever more complex error correction into quantum systems are vital, yet this work offers a complementary and immediately useful diagnostic tool: a way to assess the suitability of a quantum machine learning model, specifically a Variational Quantum Classification algorithm, before investing significant resources in running it on actual, noisy quantum computers. The metric allows a classical pre-evaluation of VQC performance, reducing reliance on resource-intensive tests on quantum hardware, and moves beyond defining ‘shallow’ circuits by depth alone. The research demonstrates that the ease with which a quantum algorithm distinguishes between data categories, quantified by relative entropy, is strongly linked to the complexity of the quantum circuit required to run it. This offers a valuable alternative to focusing solely on circuit depth, acknowledging the importance of data clarity and providing a more holistic assessment of model feasibility. Circuit depth, the number of sequential gate layers in a circuit, is a key factor in the accumulation of errors on NISQ devices; however, even a shallow circuit may perform poorly if the data is inherently difficult to classify.
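The depth notion used above — sequential layers rather than total gate count — can be made concrete without a quantum SDK. In this toy sketch a circuit is just a list of gates, each given as the tuple of qubit indices it acts on; gates on disjoint qubits share a layer, while gates sharing a qubit must serialise. The gate lists are hypothetical.

```python
def circuit_depth(gates):
    """Depth = number of parallel layers for a gate list.

    gates: list of tuples of qubit indices each gate acts on.
    """
    last_layer = {}  # qubit -> index of the last layer it participated in
    depth = 0
    for qubits in gates:
        # A gate starts one layer after the latest layer of any qubit it uses.
        start = 1 + max((last_layer.get(q, 0) for q in qubits), default=0)
        for q in qubits:
            last_layer[q] = start
        depth = max(depth, start)
    return depth

# Three single-qubit gates on different qubits run in parallel: depth 1.
print(circuit_depth([(0,), (1,), (2,)]))        # → 1
# A two-qubit-gate chain serialises across shared qubits: depth 3.
print(circuit_depth([(0, 1), (1, 2), (2, 3)]))  # → 3
```

The second example has only three gates yet depth 3, while the first also has three gates at depth 1 — which is why depth, not gate count, tracks error accumulation, and why the paper pairs it with a data-separability measure.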

The implications of this work extend beyond simply improving VQC performance. By providing a means to assess model feasibility prior to execution, it facilitates more efficient resource allocation and accelerates the development of practical quantum machine learning applications. This is particularly important in areas such as drug discovery, materials science, and financial modelling, where quantum algorithms have the potential to offer significant advantages over classical methods. Furthermore, the metric could be integrated into automated machine learning (AutoML) pipelines, enabling the selection of optimal VQC architectures and hyperparameters for specific datasets and quantum hardware platforms. The ability to predict performance based on classical calculations also opens up the possibility of developing new, more robust VQC designs that are inherently less susceptible to noise. This research, therefore, represents a significant step towards realising the full potential of variational quantum algorithms in the NISQ era, bridging the gap between theoretical promise and practical implementation.

The research revealed a strong link between a quantum model’s ability to differentiate between data categories (measured using relative entropy), its circuit complexity, and its actual performance on noisy quantum devices. This matters because it offers a way to predict how well a Variational Quantum Classification (VQC) model will function on real hardware before running it, potentially reducing the need for costly tests. The team found that circuit depth alone is not a reliable indicator of success, highlighting the importance of clearly separable data for effective classification. Future work could integrate the relative entropy metric into automated machine learning systems to optimise VQC designs for specific datasets and quantum computers.

👉 More information
🗞 The Average Relative Entropy and Transpilation Depth determines the noise robustness in Variational Quantum Classifiers
🧠 ArXiv: https://arxiv.org/abs/2603.21300

Rusty Flint

Rusty is a quantum science nerd. He's been into academic science all his life, but spent his formative years doing less academic things. Now he turns his attention to writing about his passion, the quantum realm. He loves all things quantum, especially the more esoteric side of quantum computing and the quantum world, from quantum entanglement to the foundations of quantum physics. Rusty thinks we are at the quantum equivalent of the 1950s in classical computing. While other quantum journalists focus on IBM's latest chip or which startup just raised $50 million, Rusty's over here writing 3,000-word deep dives on whether quantum entanglement might explain why you sometimes think about someone right before they text you. (Spoiler: it doesn't, but the exploration is fascinating.)
