Understanding the capabilities of quantum neural networks is a central challenge in modern machine learning, and researchers are actively investigating whether these networks can outperform their classical counterparts. Anderson Melchor Hernandez, Davide Pastorello, and Giacomo De Palma, all from the Dipartimento di Matematica at the Università di Bologna, have developed a new method for efficiently calculating a key property of these networks, known as the Neural Tangent Kernel. Their work demonstrates that, for a broad class of quantum neural networks, this kernel can be computed using classical computers, revealing a fundamental limitation on the potential for quantum advantage in this area. By simplifying the complex calculations required to define the kernel, the team's approach provides a crucial tool for assessing the practical power of quantum machine learning models and suggests that achieving a significant speedup over classical methods may be more difficult than previously thought.
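To make the central object concrete: for any parametrised model, the neural tangent kernel at two inputs is the inner product of the parameter gradients of the model's output. Below is a minimal sketch for a toy *classical* one-layer tanh model, purely for illustration; the paper's object is the NTK of a quantum neural network, and all names and shapes here are assumptions of the example.

```python
import numpy as np

# Minimal sketch of the neural tangent kernel (NTK) for a toy classical
# one-layer tanh model (illustrative only; the paper studies the NTK of
# quantum neural networks). The NTK at two inputs is the inner product
# of parameter gradients:
#   K(x, x') = <grad_theta f(x; theta), grad_theta f(x'; theta)>.

rng = np.random.default_rng(1)
width = 64
w = rng.normal(size=width)                    # hidden weights
a = rng.normal(size=width) / np.sqrt(width)   # output weights

def grad_f(x):
    """Gradient of f(x) = sum_j a_j * tanh(w_j * x) w.r.t. (a, w)."""
    h = np.tanh(w * x)
    da = h                          # df/da_j = tanh(w_j * x)
    dw = a * (1.0 - h ** 2) * x     # df/dw_j = a_j * tanh'(w_j * x) * x
    return np.concatenate([da, dw])

def ntk(x1, x2):
    return grad_f(x1) @ grad_f(x2)

print(ntk(0.3, 0.7))  # one kernel entry at initialisation
```

In the quantum setting, the model output is a circuit expectation value and the gradients run over gate angles, but the kernel is built in exactly the same way.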
This kernel describes how a network behaves during training and helps characterise its ability to generalise to new data. Calculating it exactly is often computationally demanding for large networks, motivating the development of faster approximation techniques. This work focuses on quantum neural networks, which potentially offer advantages in learning but also complicate kernel calculation because of the complexity of quantum circuits. The authors show that the average over the entire distribution of initial parameters can be replaced by an average over only four discrete values, and these specific values correspond to operations that can be efficiently simulated on classical computers. This reduction, combined with recent findings linking different analytical approaches, dramatically simplifies the computational complexity of analysing these networks.
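The averaging step can be illustrated with a toy calculation. When a parameter enters the circuit through a Pauli rotation, the quantities of interest are low-degree trigonometric polynomials in that parameter, and for such functions the average over a uniformly distributed angle coincides with the average over a handful of equally spaced angles. A hedged numerical check follows; the particular set {0, π/2, π, 3π/2} is an illustrative choice, and the paper specifies its own set of four values.

```python
import numpy as np

# Toy check of the quadrature fact behind a "four discrete values"
# reduction (an illustration, not the paper's full argument): if
# f(theta) contains only harmonics e^{ik*theta} with |k| <= 3, its
# average over theta ~ Uniform[0, 2*pi) equals its average over the
# four equally spaced angles {0, pi/2, pi, 3*pi/2}.

def f(theta):
    # Example: the squared gradient of <Z> = cos(theta) for Ry(theta)|0>,
    # i.e. sin^2(theta) = (1 - cos(2*theta)) / 2, harmonics |k| <= 2.
    return np.sin(theta) ** 2

# Average over the continuous uniform distribution (dense grid).
grid = np.linspace(0.0, 2.0 * np.pi, 100000, endpoint=False)
continuous_avg = f(grid).mean()

# Average over just four discrete angles.
four_angles = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
discrete_avg = f(four_angles).mean()

print(continuous_avg, discrete_avg)  # both print 0.5
```

Replacing a continuous integral per parameter with a four-point sum is what turns the expected-kernel computation into a finite, classically tractable calculation.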
Barren Plateaus Limit Quantum Neural Network Advantage
This research investigates whether quantum neural networks can be efficiently simulated on classical computers. If a quantum neural network is classically simulable, any potential advantage over classical networks may be illusory. The study explores the relationship between barren plateaus, a phenomenon in which training becomes ineffective because gradients vanish, and the expressibility of quantum neural networks, and it demonstrates that avoiding barren plateaus does not by itself guarantee that a quantum network evades classical simulation. The analysis rests on rigorous mathematical proofs and bounds, utilising tools from random matrix theory, probability, and functional analysis. The method applies to networks built from specific types of quantum circuits, leveraging a mathematical simplification that allows averaging over only four discrete values instead of the entire distribution of initial parameters. This reduction significantly improves computational efficiency, enabling classical computers to accurately predict the expected output of these complex networks. The authors acknowledge that their analysis relies on specific assumptions about the network architecture and depth, in particular that the depth grows at most logarithmically with the number of qubits. Future work could explore the implications of these findings for different network structures and investigate whether alternative training methods might overcome these limitations.
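For intuition about barren plateaus: the hallmark is that the variance of the cost gradient, taken over random parameter initialisations, decays rapidly as the number of qubits grows. The toy NumPy simulation below estimates this variance for a generic hardware-efficient ansatz of Ry rotations and CZ ladders, using the parameter-shift rule; the circuit family is an assumed example for illustration, not the construction analysed in the paper.

```python
import numpy as np

# Toy barren-plateau illustration (assumed Ry + CZ ansatz, not the
# paper's circuit family): estimate Var[dE/dtheta_0] over random
# initialisations and watch it shrink as the qubit count n grows.

def apply_1q(state, gate, qubit, n):
    """Apply a 2x2 gate to one qubit of an n-qubit state vector."""
    psi = np.tensordot(gate, state.reshape([2] * n), axes=([1], [qubit]))
    return np.moveaxis(psi, 0, qubit).reshape(-1)

def apply_cz(state, q1, q2, n):
    """Controlled-Z: flip the sign of amplitudes where both qubits are 1."""
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    psi[tuple(idx)] *= -1.0
    return psi.reshape(-1)

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def expval_z0(thetas, n, depth):
    """<Z_0> after `depth` layers of per-qubit Ry rotations + CZ ladder."""
    state = np.zeros(2 ** n)
    state[0] = 1.0
    k = 0
    for _ in range(depth):
        for q in range(n):
            state = apply_1q(state, ry(thetas[k]), q, n)
            k += 1
        for q in range(n - 1):
            state = apply_cz(state, q, q + 1, n)
    probs = (np.abs(state) ** 2).reshape([2] * n)
    return probs[0].sum() - probs[1].sum()  # P(qubit0=0) - P(qubit0=1)

rng = np.random.default_rng(0)
for n in [2, 4, 6, 8]:
    depth = n  # depth growing with n makes the plateau visible
    grads = []
    for _ in range(200):
        thetas = rng.uniform(0.0, 2.0 * np.pi, size=n * depth)
        shift = np.zeros_like(thetas)
        shift[0] = np.pi / 2
        # Parameter-shift rule: exact gradient for a Pauli rotation.
        g = 0.5 * (expval_z0(thetas + shift, n, depth)
                   - expval_z0(thetas - shift, n, depth))
        grads.append(g)
    print(f"n={n}: Var[dE/dtheta_0] ~ {np.var(grads):.2e}")
```

The point of the paper's result is that such concentration cuts both ways: the same averaging structure that flattens the training landscape is what makes the expected kernel classically computable, so escaping the plateau alone does not certify a quantum advantage.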
👉 More information
🗞 Efficient classical computation of the neural tangent kernel of quantum neural networks
🧠 arXiv: https://arxiv.org/abs/2508.04498
