Quantum Neural Networks With Quantum Perceptrons (QPs) Use Fewer Qubits

Researchers from Caltech, NVIDIA and Harvard have demonstrated a result that paves the way for more efficient and scalable quantum machine learning models: a specific type of quantum circuit, known as a Quantum Perceptron (QP), can approximate continuous functions with high accuracy while using fewer qubits than previously thought. This achievement is crucial for building reliable and efficient quantum neural networks.

 

(a) Individual 87Rb atoms are trapped using optical tweezers (vertical red beams) and arranged into defect-free arrays from an initially probabilistic configuration. Coherent interactions Vij between the atoms are facilitated by exciting them to a Rydberg state with coupling strength Ω and detuning ∆. (b) The schematic shows the ground-state phase diagram of the Hamiltonian, highlighting the Z2 and Z3 phases with different broken symmetries as a function of interaction range and detuning. A dataset of noisy states from these phases serves as the perceptron’s input. (c) The QP comprises N input qubits and a single output qubit. The qubits evolve under a Hamiltonian that makes the probability of the output qubit being in the |0⟩ state a nonlinear function of the state of the input qubits. Following this evolution, the output qubit is measured.

The research team designed a QP architecture that uses Rydberg atom arrays to implement the perceptron model, a fundamental building block of machine learning algorithms. They also showed that the QP can be combined with reservoir computing, a technique inspired by classical random feature networks, to enhance learning. This work has significant implications for practical quantum machine learning and could lead to breakthroughs in areas such as image recognition and natural language processing.

The authors build upon the foundation laid by Gonon and Jacquier, demonstrating that parameterized quantum circuits can approximate continuous functions bounded in L1 norm up to an error of order n^{-1/2}, where the number of qubits scales only logarithmically with n. Specifically, they show that a quantum neural network with O(ϵ^{-2}) weights and O(log2(ϵ^{-1})) qubits suffices to achieve accuracy ϵ > 0 when approximating functions with integrable Fourier transform.
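To make the scaling concrete, here is a small illustrative sketch of how the weight and qubit counts grow as the target error ϵ shrinks. The constant factors (set to 1 here) are assumptions for illustration, not figures from the paper; only the asymptotic scalings come from the result quoted above.

```python
import math

def qp_resources(eps):
    """Illustrative resource estimate for target error eps, using the
    stated scalings: O(eps**-2) weights and O(log2(1/eps)) qubits.
    Constant factors are set to 1 purely for illustration."""
    weights = math.ceil(eps ** -2)             # weight count grows polynomially
    qubits = math.ceil(math.log2(1.0 / eps))   # qubit count grows only logarithmically
    return weights, qubits

for eps in (0.1, 0.01, 0.001):
    w, q = qp_resources(eps)
    print(f"eps = {eps}: ~{w} weights, ~{q} qubits")
```

The point of the exercise: halving the error quadruples the weight count but adds only a single qubit, which is why the construction avoids a blow-up in qubit requirements.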

The manuscript presents an advancement in Quantum Machine Learning (QML) by introducing Quantum Perceptrons (QPs), which can be implemented on quantum processors. The authors show that QPs can approximate classical functions with an error that scales as n^{-1/2}, independently of the input dimension, ensuring that no curse of dimensionality occurs.

The authors also explore the connection between QPs and reservoir computing, drawing parallels with classical random feature networks. This confluence of error bounds in classical and quantum settings strengthens our understanding of quantum neural networks’ computational universality. It also provides a roadmap for their efficient implementation.
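The classical side of this correspondence can be sketched with a random feature network: the features are fixed and random, and only a linear readout is trained, which is the same role a fixed quantum "reservoir" would play. All numbers below (feature count, frequency range, ridge penalty, target function) are illustrative choices, not taken from the manuscript.

```python
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # A smooth 1-D function to approximate (illustrative choice).
    return np.sin(3 * x) + 0.5 * np.cos(7 * x)

# Random Fourier features: weights and phases are drawn once and frozen;
# only the linear readout below is trained, mirroring reservoir computing.
n_features = 200
W = rng.uniform(-10, 10, n_features)
b = rng.uniform(0, 2 * np.pi, n_features)

def features(x):
    return np.cos(np.outer(x, W) + b)

x_train = rng.uniform(-1, 1, 500)
y_train = target(x_train)
Phi = features(x_train)

# Train only the readout, by ridge regression.
lam = 1e-6
readout = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_features),
                          Phi.T @ y_train)

x_test = np.linspace(-1, 1, 100)
err = np.max(np.abs(features(x_test) @ readout - target(x_test)))
print(f"max test error: {err:.4f}")
```

The design choice worth noting is that the random features are never optimized; the error bounds for such networks are of the same n^{-1/2} type as the quantum result, which is what motivates the parallel drawn in the text.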

The manuscript concludes by highlighting the significance of QPs as reliable building blocks for scalable quantum neural networks. The authors propose experimental strategies for encoding QPs on arrays of Rydberg atoms, including single-species and dual-species approaches. They also discuss potential avenues for future research. These areas include experimental validation. They also look at incorporating multiple output qubits and integrating quantum reservoir computing with QPs.

This work represents a crucial step forward in developing QML models, offering a promising architecture for scalable and efficient quantum neural networks.

Schematic of a QP that operates on N input qubits and 2 output qubits (right), which evolve under the Hamiltonian in Eq. (14). The circuit (left) begins with the preparation of the input state |Φ(x)⟩ (green), where each input x is encoded into the quantum state. The input-output system then evolves via a series of single-qubit rotations along the x-, y- and z-axes, interspersed with controlled entangling gates, as indicated by the Hamiltonian HP, for a time τ. Each output qubit interacts independently with all input qubits, ensuring that its evolution is influenced solely by the input qubits, with no interaction between the output qubits. Finally, the circuit output is measured to obtain expectation values, which are used to compute a loss function comparing them to the target function ỹ(x) defined by the labels. The parameters of the QP are updated via gradient descent to minimize this loss, optimizing the performance of the perceptron for multi-class classification tasks.
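As a rough illustration of this encode-evolve-measure-update loop, here is a classically simulated toy QP with a stand-in Hamiltonian. The ZZ-coupling model, qubit counts, targets and hyperparameters below are all assumptions made for illustration; the paper's actual evolution is the one in its Eq. (14), and a real device would estimate the output probability from measurement shots rather than from the exact state vector.

```python
import numpy as np

# Toy, classically simulated QP: N input qubits coupled to one output
# qubit by a parameterized Hamiltonian, trained by gradient descent.
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]])

N = 2  # input qubits; the output qubit is the last tensor factor

def qp_output(x, theta, tau=1.0):
    """P(output qubit = |0>) after evolving under the stand-in
    Hamiltonian H = sum_j theta[j] Z_j Z_out + theta[N] X_out for time
    tau -- a nonlinear function of the encoded input x."""
    psi = np.array([1.0 + 0j])
    for xi in x:                                  # angle-encode each feature
        psi = np.kron(psi, ry(xi) @ np.array([1.0, 0.0]))
    psi = np.kron(psi, np.array([1.0 + 0j, 0.0]))  # output qubit in |0>
    H = np.zeros((2 ** (N + 1), 2 ** (N + 1)), dtype=complex)
    for j in range(N):                            # input-output ZZ couplings
        ops = [I2] * (N + 1)
        ops[j], ops[N] = Z, Z
        H += theta[j] * kron_all(ops)
    ops = [I2] * (N + 1)
    ops[N] = X                                    # transverse field on output
    H += theta[N] * kron_all(ops)
    vals, vecs = np.linalg.eigh(H)                # exact evolution exp(-i H tau)
    psi = vecs @ (np.exp(-1j * tau * vals) * (vecs.conj().T @ psi))
    probs = np.abs(psi) ** 2
    return probs.reshape(-1, 2)[:, 0].sum()       # marginal P(output = |0>)

# Toy regression task, optimized by finite-difference gradient descent.
rng = np.random.default_rng(1)
xs = rng.uniform(0, np.pi, size=(20, N))
ys = 0.5 + 0.4 * np.sin(xs.sum(axis=1))           # illustrative targets in [0, 1]
theta = rng.normal(scale=0.1, size=N + 1)

def loss(th):
    return float(np.mean([(qp_output(x, th) - y) ** 2
                          for x, y in zip(xs, ys)]))

initial_loss = loss(theta)
fd, lr = 1e-4, 0.2
for _ in range(100):
    grad = np.array([(loss(theta + fd * e) - loss(theta - fd * e)) / (2 * fd)
                     for e in np.eye(N + 1)])
    theta -= lr * grad
final_loss = loss(theta)
print(f"loss: {initial_loss:.4f} -> {final_loss:.4f}")
```

Even in this toy form, the structure matches the caption: the inputs are encoded into qubit rotations, the output qubit's |0⟩-probability is a nonlinear function of them, and the Hamiltonian parameters are updated by gradient descent on a loss against the labels.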
Quantum News

As the Official Quantum Dog (or hound), my role is to dig out the latest nuggets of quantum goodness. There is so much happening right now in technology, whether AI or the march of the robots, but quantum occupies a special space. Quite literally a special space. A Hilbert space, in fact, haha! Here I try to provide some of the news that might be considered breaking in the quantum computing space.
