A new quantum residual neural network overcomes limitations of existing models and offers a pathway towards practical quantum machine learning. Amena Khatun and colleagues at The University of Melbourne demonstrate a hardware-efficient architecture that implements residual connections without relying on post-selection, a key advancement in the field. The model matches the accuracy of standard variational models, reaching 99% on binary and 80% on multi-class classification across benchmark datasets including MNIST, CIFAR, and SARFish, while requiring ten times fewer quantum gates. This reduction in complexity is vital for implementation on near-term quantum processors, and the model also exhibits promising adversarial robustness, addressing a key requirement for reliable quantum machine learning applications.
Quantum residual networks enable tenfold reduction in gate complexity for image classification
A new quantum residual neural network architecture reduces the gate counts needed for quantum image classification tenfold. This advance matters because previous variational quantum classifiers required exponentially more gates as circuit depth grew, hindering implementation on near-term quantum processors. Researchers at The University of Melbourne have developed a model achieving 99% accuracy for binary image classification and 80% for multi-class tasks on datasets including MNIST, CIFAR, and SARFish, while also addressing barren plateaus, the vanishing gradients that commonly stall the training of quantum machine learning models.
Consistent learning dynamics were observed across diverse datasets, with 80% accuracy achieved on SARFish images, a remote sensing application, using only 200 quantum gates. Analysis of the full-scale MNIST dataset revealed that combining 30 layers of a standard variational quantum classifier with five residual blocks boosted accuracy to 99%, matching previously reported accuracies at a reduced gate count. This performance was attained with ten times fewer gates than a comparable deep variational circuit, improving hardware efficiency for resource-constrained quantum processors.
The model also demonstrated adversarial robustness, retaining accuracy when tested against attacks transferred from classical machine learning models. Quantum machine learning holds potential for image classification and other applications, but realising this requires circuits small enough for current quantum hardware. The model relies on amplitude encoding to convert classical data into a quantum format, a method that, while effective, could become limiting as dataset sizes increase and more complex data types are used.
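For context, amplitude encoding stores a classical vector in the amplitudes of a quantum state, so a vector of up to 2^n values needs only n qubits. The minimal NumPy sketch below illustrates the idea; the function name and the zero-padding choice are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def amplitude_encode(image):
    """Pack a classical image into the amplitudes of an n-qubit state.

    A normalised 2**n-dimensional vector can be stored in just n qubits,
    which is what makes amplitude encoding attractive for image data.
    """
    x = image.flatten().astype(float)
    n_qubits = int(np.ceil(np.log2(len(x))))   # qubits needed for this image
    padded = np.zeros(2 ** n_qubits)
    padded[: len(x)] = x                        # zero-pad up to a power of two
    return padded / np.linalg.norm(padded)      # unit norm = valid quantum state

# A 28x28 MNIST image (784 pixels) fits in ceil(log2(784)) = 10 qubits.
state = amplitude_encode(np.random.rand(28, 28))
print(state.size, round(np.linalg.norm(state), 6))  # 1024 amplitudes, norm 1.0
```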
Although these results were obtained through simulations and do not yet demonstrate performance on actual quantum hardware, where qubit limitations and noise remain substantial hurdles, the model establishes a pathway towards scalable quantum machine learning by addressing key limitations of existing variational models. Deterministic residual connections, implemented via combinations of identity and variational unitaries, allow fully differentiable training without probabilistic post-selection. This efficiency is vital: current quantum computers have limited processing power, so fewer gates mean more complex problems can be solved on existing hardware. Achieving accuracy comparable to standard models on image classification tasks, including challenging datasets like SARFish, while requiring ten times fewer quantum gates demonstrates the hardware efficiency crucial for near-term quantum processors.
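The paper's exact circuit construction is not reproduced here, but the residual idea can be illustrated at the statevector level, where a block acts as |ψ'⟩ ∝ (I + U(θ))|ψ⟩ and remains differentiable in the parameters θ. In the sketch below, the two-qubit layer, gate choices, and renormalisation are illustrative assumptions; realising an equivalent effect on hardware without post-selecting an ancilla, as standard linear-combination-of-unitaries circuits would, is the paper's contribution.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def variational_unitary(thetas):
    """Toy two-qubit variational layer: an RY on each wire, then a CNOT."""
    return CNOT @ np.kron(ry(thetas[0]), ry(thetas[1]))

def residual_block(state, thetas):
    """Residual action |psi'> proportional to (I + U(theta))|psi>.

    In simulation this non-unitary map is applied directly and stays
    differentiable; no measurement or post-selection step is needed.
    """
    out = state + variational_unitary(thetas) @ state
    return out / np.linalg.norm(out)

psi = np.zeros(4)
psi[0] = 1.0                                  # start in |00>
psi = residual_block(psi, thetas=[0.3, -0.7])
print(round(np.linalg.norm(psi), 6))          # 1.0: still a valid state
```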
The researchers developed a quantum residual neural network achieving 99% accuracy on binary image classification and 80% on multi-class tasks, using datasets including MNIST, CIFAR, and SARFish. This model offers improved hardware efficiency because it requires ten times fewer quantum gates than comparable standard variational models while maintaining similar accuracy. The architecture mitigates a common problem in quantum learning, known as barren plateaus, and also demonstrates robustness against adversarial attacks. This work presents a new approach to building trainable and efficient quantum machine learning models suitable for near-term quantum processors.
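A transferred attack of the kind tested in the paper is typically crafted with a method such as the fast gradient sign method (FGSM) on a classical surrogate model, then applied unchanged to the quantum classifier. The sketch below uses a toy logistic-regression surrogate; the surrogate, its weights, and the epsilon value are illustrative assumptions, and the paper's specific attack setup may differ.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Fast Gradient Sign Method on a logistic-regression surrogate.

    The adversarial image x + eps * sign(grad_x loss) is crafted entirely
    on the classical model; checking whether it still degrades the quantum
    classifier is what "transferred attack" means.
    """
    p = sigmoid(w @ x + b)           # surrogate's predicted probability
    grad_x = (p - y) * w             # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w, b = rng.normal(size=784), 0.0     # toy surrogate weights (assumed pre-trained)
x, y = rng.random(784), 1.0          # one flattened 28x28 image and its label
x_adv = fgsm_perturb(x, y, w, b, eps=0.1)
print(np.max(np.abs(x_adv - x)))     # perturbation bounded by eps
```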
👉 More information
🗞 A hardware efficient quantum residual neural network without post-selection
🧠 ArXiv: https://arxiv.org/abs/2604.06866
