Quantum Neural Network Expressivity Quantified and Optimised via Reinforcement Learning.

Quantum neural networks (QNNs) represent a potentially powerful computational paradigm; however, realising their full capabilities hinges on accurately defining and maximising their expressive power, that is, their ability to represent complex functions. A key obstacle lies in quantifying this expressivity, a challenge addressed in research by Yao et al., which introduces a novel metric, termed ‘effective rank’, to characterise the genuinely independent parameters within a quantum circuit. This work, detailed in their article ‘Learning to Maximize Quantum Neural Network Expressivity via Effective Rank’, demonstrates how the metric can be used both to assess circuit performance rigorously and to guide the automated design of more expressive architectures using a reinforcement learning framework. The research originates from the International Quantum Academy and offers a practical approach to optimising QNNs, potentially enhancing their application in areas such as machine learning and optimisation problems.

Current research into quantum neural networks (QNNs) concentrates on translating the theoretical advantages of quantum computation into demonstrable performance gains for machine learning tasks. A central theme revolves around quantifying and maximising the expressivity of these networks, which refers to their capacity to represent complex functions. Traditional metrics, such as raw parameter count, prove inadequate; researchers now employ measures such as effective rank, which assesses the number of genuinely independent degrees of freedom within a QNN rather than simply the total number of adjustable parameters. The objective is to design circuits that achieve maximum expressivity given constraints imposed by circuit depth, qubit connectivity, and input data characteristics.
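To make the distinction between parameter count and independent degrees of freedom concrete, here is a minimal NumPy sketch (an illustration of the general idea, not the paper's actual definition or implementation). It builds a hypothetical two-qubit circuit in which two rotation angles act back-to-back on the same qubit, so only their sum matters, and estimates an effective rank as the numerical rank of the Jacobian of the output state with respect to the parameters:

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the Y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def state(theta):
    """|psi(theta)> for a toy 2-qubit circuit with a redundant parameter:
    Ry(t0) and Ry(t1) act back-to-back on qubit 0, so only t0 + t1 matters."""
    psi = np.zeros(4); psi[0] = 1.0          # start in |00>
    psi = np.kron(ry(theta[0]), I2) @ psi    # Ry(t0) on qubit 0
    psi = np.kron(ry(theta[1]), I2) @ psi    # Ry(t1) on qubit 0 (redundant)
    psi = CNOT @ psi
    psi = np.kron(I2, ry(theta[2])) @ psi    # Ry(t2) on qubit 1
    return psi

def effective_rank(theta, eps=1e-6, tol=1e-8):
    """Numerical rank of the Jacobian d|psi>/d(theta): the number of
    genuinely independent directions the parameters generate."""
    jac = np.column_stack([
        (state(theta + eps * e) - state(theta - eps * e)) / (2 * eps)
        for e in np.eye(len(theta))
    ])
    return int(np.sum(np.linalg.svd(jac, compute_uv=False) > tol))

theta = np.array([0.3, 0.7, 1.1])
print(len(theta), effective_rank(theta))   # 3 parameters, effective rank 2
```

The two redundant angles produce identical Jacobian columns, so the circuit has three adjustable parameters but only two independent degrees of freedom, exactly the kind of gap an effective-rank measure is designed to expose.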

Architectural innovation draws heavily from established classical neural network paradigms. Adaptations of multilayer perceptrons, recurrent neural networks, and convolutional neural networks are being explored for quantum implementation, aiming to leverage quantum phenomena such as superposition and entanglement to enhance performance. This exploration extends beyond direct analogues, with researchers investigating entirely novel QNN architectures to exploit the unique capabilities of quantum systems. The diversity of approaches reflects a lack of consensus on the optimal structure for a quantum neural network, and a desire to identify designs that outperform classical counterparts for specific problem domains.

A significant constraint on current development is the limitations of near-term quantum hardware. The current era of quantum computing is characterised by noisy intermediate-scale quantum (NISQ) devices, which possess a limited number of qubits and are susceptible to errors. Consequently, research prioritises the design of QNNs that are resilient to noise and can be implemented on these constrained platforms. This necessitates a focus on circuit depth reduction and the development of error mitigation strategies.

Beyond simply achieving high expressivity, ensuring trainability remains a critical challenge. Trainability refers to the ability to effectively adjust the network’s parameters during the learning process to minimise error. A related issue is the phenomenon of barren plateaus, where the gradients used to update parameters vanish exponentially with the number of qubits, effectively halting the learning process. Researchers are actively investigating techniques to mitigate barren plateaus and improve the optimisation landscape of QNNs, including careful initialisation strategies and the use of alternative optimisation algorithms.

Advanced computational tools are being deployed to accelerate QNN design. Reinforcement learning, utilising self-attention transformer agents, automates the search for highly expressive architectures, bypassing the need for manual design. Furthermore, concepts from quantum information theory, such as Fisher information, provide a rigorous framework for analysing QNN capabilities and limitations. Fisher information quantifies the amount of information that a probabilistic model, such as a QNN, provides about an unknown parameter, offering insights into the network’s sensitivity to changes in its parameters and its ability to generalise from data. This integration of theoretical analysis and automated design represents a concerted effort to bridge the gap between the theoretical potential and practical realisation of quantum neural networks.
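The Fisher-information viewpoint can be made concrete with a deliberately simple, hypothetical single-qubit example (ours, not the paper's): a circuit Rz(t1)·Ry(t0)|0⟩ measured in the computational basis. Because Rz only shifts phases, the measurement distribution carries no information about t1 at all, so the classical Fisher information matrix, estimated here by finite differences, is rank-deficient, with rank 1 for 2 parameters:

```python
import numpy as np

def probs(theta):
    """Z-basis measurement probabilities of Rz(t1) Ry(t0) |0>.
    Rz only shifts phases, so t1 cannot affect these probabilities."""
    c, s = np.cos(theta[0] / 2), np.sin(theta[0] / 2)
    psi = np.array([c, s * np.exp(1j * theta[1])])  # global phase dropped
    return np.abs(psi) ** 2

def fisher(theta, eps=1e-5):
    """Classical Fisher information matrix F_ij = sum_x (dp_i dp_j) / p,
    with derivatives estimated by central finite differences."""
    p = probs(theta)
    jac = np.column_stack([
        (probs(theta + eps * e) - probs(theta - eps * e)) / (2 * eps)
        for e in np.eye(len(theta))
    ])
    return jac.T @ (jac / p[:, None])

theta = np.array([0.9, 0.4])
F = fisher(theta)
print(np.round(F, 6))
# F[0,0] -> 1 (the Ry angle carries full information, independent of theta),
# while the Rz row and column vanish: rank(F) = 1 < 2 parameters.
```

The rank of this matrix is precisely the kind of quantity an effective-rank style analysis captures: the Rz parameter is adjustable but contributes nothing to what the measurement can reveal, so it should not count towards expressivity.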

👉 More information
🗞 Learning to Maximize Quantum Neural Network Expressivity via Effective Rank
🧠 DOI: https://doi.org/10.48550/arXiv.2506.15375

Quantum News

As the Official Quantum Dog (or hound), my role is to dig out the latest nuggets of quantum goodness. There is so much happening right now in technology, whether AI or the march of robots, but quantum occupies a special space. Quite literally a special space: a Hilbert space, in fact! Here I try to provide some of the breaking news in the quantum computing space.
