Quantum Machine Learning Achieves Scalability with 50-Qubit Shallow-Circuit Supervision

Quantum computing continues to seek practical applications in machine learning, but challenges remain in efficiently loading classical data and training algorithms on current hardware. Luca Candelori from Qognitive, Inc., Swarnadeep Majumder and Antonio Mezzacapo from IBM Quantum, alongside Javier Robledo Moreno et al., have now demonstrated a method to overcome these hurdles. Their research focuses on a linear Hamiltonian-based machine learning approach, representing classical data in a compact form using ground state problems for k-local Hamiltonians. By employing a sample-based Krylov quantum diagonalization technique, the team computed low-energy states and trained parameters to represent classical datasets. Experiments performed on IBM’s Heron processor, utilising up to 50 qubits, showcase the efficacy and scalability of this novel technique for quantum machine learning.

Quantum machine learning has long promised transformative advances in data analysis, yet practical implementation has remained elusive due to two fundamental obstacles: the steep quantum cost of loading classical data, and the poor trainability of many quantum machine learning algorithms designed for near-term quantum hardware. This work addresses both obstacles with a linear Hamiltonian-based machine learning approach, providing a compact quantum representation of classical data via ground state problems for k-local Hamiltonians. The researchers utilise the sample-based Krylov quantum diagonalization method to compute low-energy states of the data Hamiltonians, a significant step towards realising practical quantum machine learning on currently available hardware.

Data Embedding via Data-Induced Hamiltonians

Researchers pioneered a linear Hamiltonian-based method for compact classical data representation, leveraging ground state problems for k-local Hamiltonians to effectively encode data within the quantum system itself. This innovative approach circumvents the steep quantum cost traditionally associated with data loading and mitigates trainability issues common in near-term quantum algorithms. The team engineered a system where each data point induces a unique Hamiltonian, constructed from fixed feature operators and corresponding feature values, allowing for expressive data embedding suitable for current quantum processors. To approximate ground states for these data-specific Hamiltonians, scientists developed and implemented the sample-based Krylov quantum diagonalization (SKQD) algorithm.
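
The construction described above can be sketched numerically: each classical data point weights a fixed set of k-local Pauli feature operators, and the weighted sum is that point's Hamiltonian. The specific operators and feature values below are hypothetical illustrations, not the paper's actual choices:

```python
import numpy as np
from functools import reduce

# Single-qubit Pauli matrices
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_string(ops, n_qubits):
    """Tensor product of Pauli operators; `ops` maps qubit index -> 2x2 matrix."""
    mats = [ops.get(q, I2) for q in range(n_qubits)]
    return reduce(np.kron, mats)

def data_hamiltonian(x, feature_ops):
    """H(x) = sum_j x_j P_j: each feature value weights a fixed feature operator,
    so every data point induces its own Hamiltonian."""
    return sum(xj * P for xj, P in zip(x, feature_ops))

# Example: 3 qubits, one 1-local and two 2-local feature operators (assumed choices)
n = 3
feature_ops = [
    pauli_string({0: Z}, n),
    pauli_string({0: X, 1: X}, n),
    pauli_string({1: Z, 2: Z}, n),
]
x = np.array([0.5, -1.2, 0.8])          # one classical data point
H = data_hamiltonian(x, feature_ops)    # its induced Hamiltonian
ground_energy = np.linalg.eigvalsh(H).min()
```

Because the feature operators are fixed across the dataset, only the weights change from point to point, which keeps the encoding compact.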

The technique draws quantum samples from Krylov quantum states, then performs classical diagonalization within the subspace spanned by the sampled bitstrings. Under the assumption of sparse ground states, this procedure has provable convergence for approximating ground state energies, and it delivers both ground state and low-energy spectrum approximations. Experiments employed the IBM Heron quantum processor, utilising up to 50 qubits to demonstrate the efficacy and scalability of the methodology. The research harnessed local gradients to train the parameters of the data Hamiltonians, enabling the expression of classical datasets within the quantum framework. This process aligns feature and label operators with the underlying learning task, facilitating accurate classification based on ground state measurements. By learning the data encoding as a Hamiltonian, the study avoids pre-defined ansatzes, sidestepping the barren plateaus and unfavourable local minima that plague variational quantum algorithms, and yields an embedding of classical data that is robust to hardware noise and beyond easy classical simulation.
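
The classical half of this workflow can be illustrated with a minimal sketch: sampled bitstrings (here stand-in integers, where on hardware they would come from measuring Krylov states) define a small subspace, and the Hamiltonian is diagonalized inside it. By eigenvalue interlacing, the subspace estimate upper-bounds the true ground energy. The random test matrix is purely illustrative:

```python
import numpy as np

def skqd_classical_step(H, bitstring_samples):
    """Simplified classical step of sample-based Krylov quantum diagonalization:
    project H onto the span of the sampled computational basis states, then
    diagonalize the small projected matrix classically."""
    support = sorted(set(bitstring_samples))     # distinct sampled bitstrings
    H_sub = H[np.ix_(support, support)]          # projected Hamiltonian
    evals, evecs = np.linalg.eigh(H_sub)         # cheap: dimension = |support|
    return evals[0], support, evecs[:, 0]

# Toy example: a small symmetric H on 3 qubits whose ground state is supported
# on a few basis states, so a handful of samples suffices.
rng = np.random.default_rng(0)
n = 3
H = np.diag(np.arange(2**n, dtype=float))
H += 0.1 * rng.standard_normal((2**n, 2**n))
H = (H + H.T) / 2
samples = [0, 1, 2, 0, 1]                        # bitstrings as integers
e0_approx, support, gs_approx = skqd_classical_step(H, samples)
```

If the true ground state is sparse in the computational basis, a modest number of samples captures its support, which is the assumption behind SKQD's convergence guarantee.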

Efficient Quantum Data Embedding with Linear Hamiltonians

Scientists achieved a breakthrough in quantum machine learning by demonstrating a linear Hamiltonian-based method capable of efficiently embedding classical data onto quantum hardware. The research team trained a model on benchmark datasets using up to 50 qubits of an IBM Heron processor, overcoming longstanding obstacles in quantum data analysis via a compact representation of classical data through ground state problems for k-local Hamiltonians. The study approximates ground states using the sample-based Krylov quantum diagonalization (SKQD) algorithm, a technique with provable convergence for sparse ground states. Measurements confirm that SKQD effectively computes low-energy states of the data Hamiltonians. The Hamiltonian parameters are trained through local gradients to express classical datasets: each data point induces a distinct Hamiltonian, constructed from fixed feature operators weighted by that point's feature values, so the data encoding itself is learned.

The team recorded non-vanishing gradients during training on the Heron processor, a crucial indicator that the model is genuinely learning. Results demonstrate the ability to train a model on a binary classification problem with ten features, showcasing scalability on current pre-fault-tolerant quantum computers. The workflow operates on sparse ground states obtained from SKQD, probing only a small subspace of the full Hilbert space and keeping computational demands modest. Classical features are embedded as weighted combinations of Pauli terms implemented with single- and two-qubit gates, which generate the Krylov states and enable efficient data representation. This work introduces a novel approach to quantum machine learning, encoding data into k-local Hamiltonians to achieve a compact representation and avoid optimization collapse. Tests demonstrate the method's robustness to hardware noise and its potential to operate beyond the limitations of classically simulable circuits, offering a pathway towards practical machine learning on near-term quantum processors, with applications spanning finance, healthcare, and general data science.
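
The classification readout based on ground state measurements can be sketched as evaluating a label operator's expectation value on the (approximate) ground state and thresholding its sign. The choice of Z on the first qubit as the label operator is an assumption for illustration, not the paper's specified operator:

```python
import numpy as np

def predict_label(ground_state, label_op):
    """Binary classification from the ground state of a data Hamiltonian:
    the predicted class is the sign of the label operator's expectation value
    (a common readout convention, assumed here for illustration)."""
    expectation = float(np.real(ground_state.conj() @ label_op @ ground_state))
    return 1 if expectation >= 0 else -1

# Toy example on 2 qubits: label operator Z on qubit 0 (assumed choice)
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)
label_op = np.kron(Z, I2)

psi = np.zeros(4)
psi[0] = 1.0        # |00>: <Z on qubit 0> = +1, so predicted class is +1
```

On hardware this expectation value would be estimated from repeated measurements of the sampled ground state rather than computed from a state vector.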

Hamiltonian Training Achieves Scalable Quantum Machine Learning

Researchers have demonstrated successful training of a linear Hamiltonian-based machine learning model utilising current quantum processors. This work overcomes obstacles in quantum machine learning related to data loading and algorithm trainability by representing classical data as ground state problems for k-local Hamiltonians. The team employed the sample-based Krylov quantum diagonalization method to compute these low-energy states, training the Hamiltonian parameters through local gradients to effectively express classical datasets. Experiments conducted on benchmark datasets, utilising up to 50 qubits on a Heron processor, confirm the efficacy and scalability of this approach. Notably, the study found that achieving high accuracy did not necessarily require a large number of energy terms in the gradient calculation, suggesting a potentially efficient use of quantum resources. Future research will focus on hyperparameter optimisation and evaluation on larger, more complex datasets, alongside a systematic comparison against classical methods to pinpoint scenarios where Hamiltonian-based encodings offer a distinct advantage.
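
As a minimal classical stand-in for this training loop, the sketch below optimises Hamiltonian parameters using exact diagonalization and finite-difference gradients. The hinge loss and the operator choices are assumptions made for illustration; in the actual workflow the ground states come from SKQD and the gradients from local measurements on hardware:

```python
import numpy as np

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

def ground_state(H):
    """Exact ground state by dense diagonalization (SKQD's role on hardware)."""
    _, evecs = np.linalg.eigh(H)
    return evecs[:, 0]

def loss(theta, data, labels, feature_ops, label_op):
    """Hinge loss on label-operator expectations over the dataset (illustrative)."""
    total = 0.0
    for x, y in zip(data, labels):
        H = sum(t * xj * P for t, xj, P in zip(theta, x, feature_ops))
        psi = ground_state(H)
        expectation = float(np.real(psi.conj() @ label_op @ psi))
        total += max(0.0, 1.0 - y * expectation)
    return total / len(data)

def numerical_gradient(theta, *args, eps=1e-4):
    """Central-difference gradient over the Hamiltonian parameters; stands in
    for the local gradient estimates measured on the quantum processor."""
    g = np.zeros_like(theta, dtype=float)
    for j in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[j] += eps
        tm[j] -= eps
        g[j] = (loss(tp, *args) - loss(tm, *args)) / (2 * eps)
    return g

# Two-qubit toy problem with two features (hypothetical operators and data)
feature_ops = [np.kron(Z, I2), np.kron(X, X)]
label_op = np.kron(Z, I2)
data = np.array([[0.7, 0.2], [-0.6, 0.3]])
labels = [1, -1]
theta = np.array([0.5, 0.3])
current_loss = loss(theta, data, labels, feature_ops, label_op)
grad = numerical_gradient(theta, data, labels, feature_ops, label_op)
```

Because the trainable parameters live in the Hamiltonian rather than in a circuit ansatz, this loop has no barren-plateau-prone variational circuit to optimise, which is the trainability advantage the paragraph above describes.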

👉 More information
🗞 Shallow-circuit Supervised Learning on a Quantum Processor
🧠 ArXiv: https://arxiv.org/abs/2601.03235

Rohail T.

I am a quantum scientist exploring the frontiers of physics and technology. My work focuses on uncovering how quantum mechanics, computing, and emerging technologies are transforming our understanding of reality. I share research-driven insights that make complex ideas in quantum science clear, engaging, and relevant to the modern world.

Latest Posts by Rohail T.:

Topology-aware Machine Learning Enables Better Graph Classification with 0.4 Gain

LLMs Enable Strategic Computation Allocation with ROI-Reasoning for Tasks under Strict Global Constraints

January 10, 2026
Lightweight Test-Time Adaptation Advances Long-Term EMG Gesture Control in Wearable Devices

January 10, 2026
Deep Learning Control Achieves Safe, Reliable Robotization for Heavy-Duty Machinery

Generalist Robots Validated with Situation Calculus and STL Falsification for Diverse Operations

January 10, 2026