Quantum Machine Learning Gains Speed with Far Fewer Measurements Needed

Estimating quantum kernels previously required a substantial number of measurement shots, limiting the scale of quantum machine learning tasks. Jian Xu of the University of Science and Technology of China and colleagues have created AQKA, or Active Quantum Kernel Acquisition, a method that allocates these shots intelligently, distributing them according to each kernel entry’s contribution to the downstream classifier. Quantum kernel learning uses quantum computers to map classical data into a high-dimensional quantum feature space, where patterns may become more readily discernible to a machine learning algorithm, and is potentially faster than classical approaches for certain problems. Its computational intensity arises from the need to estimate the kernel matrix, which represents the similarity between data points in this quantum space; that estimation traditionally demands many quantum circuit executions to gather sufficient statistics.

Unlike existing methods, which distribute shots equally within a selected subset of data, AQKA allocates resources by considering each entry’s impact on the final classification result. Demonstrations on IBM quantum hardware show performance improvements of up to 32 percentage points, achieved through a process akin to a decision tree that selects the best approach based on the available resources and the task at hand. This adaptive strategy allows AQKA to outperform uniform sampling techniques, especially when the number of available shots is significantly smaller than the size of the kernel matrix, a common limitation on near-term quantum devices.

AQKA demonstrates superior performance and efficient shot allocation for scalable quantum kernel learning

An increase of up to 32 percentage points on an ibm_pittsburgh hardware kernel demonstrates AQKA’s advantage over existing quantum kernel learning methods under limited budgets. This improvement matters because estimating quantum kernels typically requires Θ(N^2 S) measurement shots, where N is the number of data points and S the number of shots per entry. The cost therefore scales quadratically with dataset size, a bottleneck that has severely restricted the scale of quantum machine learning tasks on near-term devices. AQKA addresses this by reducing the number of shots required to achieve comparable accuracy, enabling larger datasets to be processed with limited quantum resources.
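To make the Θ(N^2 S) scaling concrete, here is a back-of-envelope sketch (the function name and the example figures are illustrative, not from the paper):

```python
# Rough cost of estimating a full quantum kernel matrix.
# A symmetric kernel over N data points has N*(N-1)/2 distinct
# off-diagonal entries; estimating each with S shots puts the total
# circuit-execution count on the order of N^2 * S.
def full_kernel_shots(n_points: int, shots_per_entry: int) -> int:
    n_entries = n_points * (n_points - 1) // 2
    return n_entries * shots_per_entry

# 1,000 training points at 1,000 shots per entry already needs
# roughly half a billion circuit executions.
print(full_kernel_shots(1000, 1000))  # 499500000
```

Even modest datasets therefore exhaust realistic shot budgets, which is the gap that adaptive allocation schemes like AQKA aim to close.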

The development of AQKA introduces a regime decomposition, identifying the conditions under which it is optimal alongside Nyström-QKE and ShoFaR, offering a tailored approach to shot allocation based on specific computational needs. Nyström-QKE and ShoFaR are established subsampling techniques that reduce the number of kernel entries estimated, but they typically allocate shots uniformly across the selected entries; AQKA complements them by optimising the allocation within the chosen subset. Tests on the ibm_pittsburgh quantum computer yielded improvements of +26 to +32 points, while live online tests on ibm_aachen and ibm_berlin hardware showed average performance increases of +17.0 ±4.8 points and +14.0 ±8.5 points respectively, demonstrating adaptability beyond simulations and robustness across hardware platforms and noise levels. However, these gains are currently limited to specific kernel ridge regression tasks and do not yet indicate how well AQKA will perform with more complex machine learning models or larger, real-world datasets. Future work will focus on extending AQKA to other machine learning algorithms and evaluating its performance on more challenging datasets.

Gradient-proportional shot allocation optimises quantum kernel classification

AQKA, or Active Quantum Kernel Acquisition, fundamentally reshapes how measurement shots are used in quantum kernel learning. Rather than distributing shots equally, it prioritises kernel entries according to their influence on the final classification result. This prioritisation rests on a closed-form acquisition theory that sets the optimal number of shots for each entry proportional to its gradient and kernel value, focusing resources where they matter most. The gradient, in this context, represents the sensitivity of the classification result to changes in a kernel entry, while the kernel value indicates that entry’s overall importance; allocating more shots to entries with high gradients and kernel values ensures the most informative data points are measured with greater precision. Experiments on ibm_pittsburgh (a 156-qubit Heron processor), ibm_aachen, and ibm_berlin hardware validated the approach and demonstrated its resilience to variations in quantum device characteristics.
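The gradient-and-kernel-value weighting described above can be sketched as follows. This is a toy illustration of proportional allocation under the stated idea, not the authors’ exact closed-form acquisition rule, and all names (`allocate_shots`, the example values) are hypothetical:

```python
import numpy as np

def allocate_shots(gradients, kernel_values, budget):
    """Toy gradient-proportional shot allocation.

    Each kernel entry i receives a share of the total shot budget
    proportional to |gradient_i| * |kernel_value_i|, so entries that
    most influence the classifier are estimated more precisely.
    """
    scores = np.abs(gradients) * np.abs(kernel_values)
    if scores.sum() == 0:
        # Fall back to uniform allocation when no entry stands out.
        return np.full(len(scores), budget // len(scores), dtype=int)
    return np.floor(budget * scores / scores.sum()).astype(int)

# Example: three kernel entries sharing a 1,000-shot budget.
shots = allocate_shots(np.array([0.9, 0.1, 0.5]),
                       np.array([0.8, 0.7, 0.2]), 1000)
print(shots)  # the high-gradient, high-value first entry dominates
```

In this sketch the first entry (large gradient and large kernel value) absorbs most of the budget, which is the qualitative behaviour the article attributes to AQKA.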

Adaptive quantum kernel learning balances performance with limited resources

AQKA is a new method engineered to optimise the use of precious measurement shots in quantum kernel learning, a vital step towards practical quantum machine learning. Nyström-QKE remains the preferred choice when quantum resources are plentiful, highlighting a persistent tension in the field between algorithmic sophistication and hardware limitations. Despite this, the approach delivers a significant advantage when measurement shots are limited, a common constraint for current quantum computers, improving performance by up to 25 percentage points in certain scenarios. This trade-off between algorithmic complexity and resource requirements is a central challenge in the development of near-term quantum algorithms.

The method introduces a new way to allocate measurement ‘shots’, the repeated circuit executions from which quantum computers gather statistics, within quantum kernel learning. Prioritising the data points that receive these shots based on their impact on classification moves beyond simply dividing resources equally. The regime decomposition is key: it identifies the scenarios where AQKA is most effective and offers a guide to selecting the best allocation strategy for the characteristics of the dataset and the capabilities of the quantum hardware. This ability to adapt to different conditions is crucial for maximising the performance of quantum kernel learning in real-world applications.

AQKA is a new method that improves the efficiency of quantum kernel learning by strategically allocating measurement shots. The research demonstrates that prioritising data points based on their impact on classification yields better performance than uniform allocation when the number of shots is limited, with improvements of up to 25 percentage points observed. This adaptive approach was validated on quantum hardware including ibm_pittsburgh, ibm_aachen, and ibm_berlin. The authors established a regime decomposition to guide the selection of the optimal allocation strategy, offering a framework for balancing algorithmic sophistication with hardware constraints.

👉 More information
🗞 AQKA: Active Quantum Kernel Acquisition Under a Shot Budget
🧠 ArXiv: https://arxiv.org/abs/2605.14672

Stay current. See today’s quantum computing news on Quantum Zeitgeist for the latest breakthroughs in qubits, hardware, algorithms, and industry deals.
The Neuron

With a keen intuition for emerging technologies, The Neuron brings over 5 years of deep expertise to the AI conversation. Coming from roots in software engineering, they've witnessed firsthand the transformation from traditional computing paradigms to today's ML-powered landscape. Their hands-on experience implementing neural networks and deep learning systems for Fortune 500 companies has provided unique insights that few tech writers possess. From developing recommendation engines that drive billions in revenue to optimizing computer vision systems for manufacturing giants, The Neuron doesn't just write about machine learning—they've shaped its real-world applications across industries. Having built real systems used across the globe by millions of users, The Neuron draws on that deep technological base to write about current and future technologies, whether AI or quantum computing.
