Artificial intelligence operating in ever-changing environments demands the ability to learn continuously, yet conventional deep learning systems struggle with a critical flaw: the loss of plasticity, where networks progressively lose their capacity to learn from new data. Yu-Qin Chen from the Graduate School of China Academy of Engineering Physics, Shi-Xin Zhang from the Institute of Physics, Chinese Academy of Sciences, and colleagues now demonstrate that quantum neural networks naturally avoid this limitation, retaining their learning capacity over extended periods. The team systematically tested this advantage across a wide range of tasks, encompassing both supervised and reinforcement learning, and using diverse data types from standard images to quantum-native datasets. The research reveals that, unlike classical models, which suffer performance decline tied to uncontrolled weight and gradient growth, quantum networks maintain consistent performance, suggesting that quantum computing offers a fundamentally robust pathway towards truly adaptive, lifelong-learning artificial intelligence.
Standard deep learning, however, suffers from a gradual loss of plasticity: networks become progressively less able to adapt to changing circumstances the longer they train. This research shows that plasticity is preserved intrinsically in continual quantum learning, with the unique properties of quantum systems overcoming this limitation. The team demonstrates that quantum networks maintain a far greater degree of plasticity than classical ones, enabling more effective continual learning and robust adaptation to evolving environments. This work contributes a new understanding of how to design continual learning systems that are resilient to plasticity loss.
Classical learning models often lose their ability to learn from new data over time. Here, researchers show that quantum learning models naturally overcome this limitation, preserving plasticity over long timescales. They demonstrate this advantage systematically across a broad spectrum of tasks, including supervised learning and reinforcement learning, and diverse data types, from classical high-dimensional images to quantum-native datasets. While classical models exhibit performance degradation correlated with unbounded weight and gradient growth, quantum neural networks maintain consistent learning capabilities regardless of the data or task presented. The origin of this advantage lies in the intrinsic physical constraints of quantum models: unitarity confines the trainable parameters to a compact space, preventing the unbounded drift observed in classical weights.
Quantum Networks Preserve Plasticity More Effectively
This research provides comprehensive evidence that quantum neural networks exhibit superior plasticity compared to classical neural networks. This advantage stems from the compactness of the parameter space in quantum networks, not simply from bounding the outputs. The authors demonstrate this through experiments across diverse domains, including image recognition and reinforcement learning.
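To make the compactness point concrete, here is a minimal numpy sketch (our own illustration under simplifying assumptions, not the authors' architecture): a chain of single-qubit rotations whose measured output is 2π-periodic in every parameter, so the parameters effectively live on a torus rather than in unbounded Euclidean space.

```python
import numpy as np

# Pauli matrices used by the toy circuit.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rx(theta):
    """Single-qubit rotation exp(-i * theta * X / 2)."""
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * X

def model(thetas):
    """Expectation <psi(thetas)| Z |psi(thetas)> for a rotation chain
    applied to |0>. This circuit shape and observable are illustrative
    assumptions, not the paper's ansatz."""
    state = np.array([1.0, 0.0], dtype=complex)
    for t in thetas:
        state = rx(t) @ state
    return float(np.real(state.conj() @ Z @ state))

thetas = np.random.uniform(-np.pi, np.pi, size=4)
# Shifting every angle by 2*pi leaves the output unchanged: the effective
# parameter space is a compact torus, not the unbounded R^n that
# classical weights inhabit.
print(model(thetas), model(thetas + 2 * np.pi))
```

No rotation angle can "run away" the way a classical weight can, which is the compactness the authors point to.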
Across all experiments, quantum networks show a stable learning curve with minimal performance degradation over long task sequences. Classical networks consistently lose plasticity, with performance dropping significantly as new tasks are introduced. Using periodic activation functions in classical networks does not prevent this decline; the root issue is the unbounded growth of weights. Quantum neural networks have far fewer trainable parameters than classical networks, contributing to their stability and preventing overfitting.
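The weight-growth failure mode is easy to reproduce in a toy setting. The sketch below is a hedged illustration, assuming synthetic random regression tasks, a small MLP with a periodic sin activation, and plain SGD (none of these are the paper's benchmarks): the activation is bounded, yet nothing bounds the weights themselves.

```python
import torch

torch.manual_seed(0)

class SinMLP(torch.nn.Module):
    """Tiny MLP with a periodic activation. The activation's output is
    bounded, but the weights are not."""
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(16, 64)
        self.fc2 = torch.nn.Linear(64, 1)

    def forward(self, x):
        return self.fc2(torch.sin(self.fc1(x)))

def weight_norm(model):
    """Global L2 norm of all trainable parameters."""
    return torch.sqrt(sum(p.detach().norm() ** 2
                          for p in model.parameters())).item()

net = SinMLP()
opt = torch.optim.SGD(net.parameters(), lr=0.05)

# A stream of unrelated random regression tasks stands in for the
# paper's task sequences.
for task in range(20):
    X = torch.randn(256, 16)
    y = torch.randn(256, 1)
    for _ in range(200):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(net(X), y)
        loss.backward()
        opt.step()
    print(f"task {task:2d}  final loss {loss.item():.3f}  "
          f"weight norm {weight_norm(net):.1f}")
```

If the weight norm climbs across tasks while final losses worsen, that mirrors the correlation the authors report between weight growth and degraded learning on later tasks.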
The authors provide statistical analysis to demonstrate the robustness of their findings, and present standard deviation bands around performance curves to show the consistency of quantum learning. Detailed control experiments demonstrate that simply bounding the output of a classical network is insufficient to preserve plasticity. The detailed experimental setup and results contribute to the reproducibility and transparency of the research.
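For intuition, a minimal sketch of what such a bounded-output control might look like (our construction, not the authors' exact setup): squashing the output through tanh caps what the network can emit, but leaves the weights free to drift without bound.

```python
import torch

class BoundedOutputMLP(torch.nn.Module):
    """Classical net whose *output* is squashed into (-1, 1). The weights
    still live in unbounded R^n, so bounding the output alone does not
    make the parameter space compact."""
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(16, 64)
        self.fc2 = torch.nn.Linear(64, 1)

    def forward(self, x):
        # tanh caps the output range; nothing caps the weight norms.
        return torch.tanh(self.fc2(torch.relu(self.fc1(x))))

net = BoundedOutputMLP()
print(net(torch.randn(4, 16)))  # always in (-1, 1), whatever the weight scale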
Quantum Neural Networks Sustain Continual Learning
This research demonstrates that quantum neural networks, unlike their classical counterparts, effectively maintain their ability to learn continuously over extended periods and across diverse tasks. The team systematically evaluated both classical deep learning models and quantum neural networks on benchmarks spanning image recognition and reinforcement learning, revealing a significant advantage for quantum models in preserving plasticity, the capacity to keep incorporating new information over long task sequences. Classical networks exhibited performance decline linked to increasing weight and gradient magnitudes, whereas quantum networks sustained consistent performance regardless of the task or data presented.
Researchers attribute this advantage to the inherent physical constraints within quantum models, specifically the unitary constraints that confine optimization to a compact parameter space. Furthermore, the findings indicate that quantum neural networks achieve comparable or superior performance with substantially fewer trainable parameters than classical models, suggesting a potential for more efficient and scalable artificial intelligence.
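A back-of-the-envelope way to see why unitarity compactifies the search space, written in the standard Pauli-rotation parameterization of variational circuits (our notation; the paper's exact ansatz may differ):

```latex
U(\boldsymbol{\theta}) = \prod_{j=1}^{n} e^{-i \theta_j P_j / 2},
\qquad
f(\boldsymbol{\theta}) =
  \langle 0 |\, U^{\dagger}(\boldsymbol{\theta})\, O\, U(\boldsymbol{\theta})\, | 0 \rangle .
```

Because each Pauli word squares to the identity, every gate reduces to cos(θ/2)·I − i·sin(θ/2)·P, so f is 2π-periodic in each parameter: the effective parameter space is a compact torus rather than unbounded Euclidean space, and the parameters cannot drift off to infinity the way unconstrained classical weights can.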
👉 More information
🗞 Intrinsic preservation of plasticity in continual quantum learning
🧠 arXiv: https://arxiv.org/abs/2511.17228
