Quantum machine learning holds increasing promise as both classical and quantum technologies advance, but a significant hurdle remains: effective models typically require vast amounts of labeled training data. Liudmila Zhukas, Vivian Ni Zhang, and Qiang Miao from Duke University, together with Qingfeng Wang from Tufts University and colleagues, address this challenge with a new self-supervised pretraining method. Their approach reduces reliance on labeled data by identifying underlying invariances within unlabeled examples, allowing the quantum model to learn from a far larger pool of unlabeled data. The team implemented the technique on a programmable trapped-ion computer, encoding images as quantum states, and demonstrated that this pretraining yields significantly improved image classification accuracy and consistency, particularly when labeled data is scarce. The result establishes a label-efficient pathway to quantum representation learning, with implications for analyzing complex, naturally occurring datasets and for scaling quantum machine learning to larger, more realistic inputs.
Quantum Supervised Contrastive Learning for Variational Algorithms
Scientists are advancing quantum machine learning by improving variational quantum algorithms, a promising approach for making use of near-term quantum computers. They apply supervised contrastive learning, a technique that improves a quantum algorithm's ability to distinguish between different inputs by learning meaningful representations. The research targets a key challenge in quantum computing, the limitations of noisy intermediate-scale quantum (NISQ) devices, and aims to develop algorithms that remain robust to these imperfections. The team implemented a quantum-enhanced supervised contrastive learning architecture, called Q-SupCon, on a 30-qubit trapped-ion quantum computer.
The experiments involved designing quantum circuits to learn representations of input data, with circuit parameters adjusted during training by stochastic gradient descent using a simultaneous perturbation gradient approximation. Data were encoded into the quantum states of the qubits and the experiment was controlled with the ARTIQ framework; the researchers demonstrated improved performance on machine learning tasks compared with conventional quantum algorithms, along with increased robustness to noise and the potential to scale to larger quantum computers. The work also highlights CAFQA bootstrapping, a technique that further enhances the performance of the variational quantum eigensolver, the underlying algorithm used to optimize the quantum circuits. The achievement contributes to the growing field of quantum machine learning, helps bridge the gap toward fault-tolerant quantum computers, and holds potential for diverse applications, including image recognition, natural language processing, and materials discovery.
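As a rough illustration of this training loop, the sketch below uses plain NumPy to drive a placeholder cost function with a simultaneous-perturbation gradient estimate: all parameters are perturbed at once along a random direction, and two cost evaluations yield the update. The cost function, step sizes, and parameter count here are illustrative assumptions, not details taken from the experiment.

```python
import numpy as np

def spsa_step(params, cost_fn, a=0.1, c=0.1, rng=np.random.default_rng(0)):
    """One simultaneous-perturbation update: perturb every parameter at once
    along a random +/-1 direction, estimate the gradient from just two cost
    evaluations, and take a descent step (hyperparameters are placeholders)."""
    delta = rng.choice([-1.0, 1.0], size=params.shape)
    cost_plus = cost_fn(params + c * delta)          # e.g. a measured circuit cost
    cost_minus = cost_fn(params - c * delta)
    grad_est = (cost_plus - cost_minus) / (2.0 * c) * delta
    return params - a * grad_est

# Toy stand-in for the measured cost of a variational circuit.
def toy_cost(params):
    return float(np.sum(np.sin(params) ** 2))

params = np.full(6, 1.0)                             # six placeholder circuit angles
for _ in range(300):
    params = spsa_step(params, toy_cost)
print(round(toy_cost(params), 4))
```

Because every update needs only two cost evaluations regardless of the number of parameters, this style of gradient estimate keeps the number of circuit runs per step low, which matters when each evaluation is a full experiment on hardware.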
Quantum Machine Learning via Self-Supervised Pretraining
Scientists have achieved a breakthrough in quantum machine learning by demonstrating self-supervised pretraining of representations on a programmable trapped-ion computer. This approach significantly reduces reliance on labeled data, establishing a label-efficient route to quantum representation learning with direct relevance to quantum-native datasets and a clear path to larger classical inputs. The researchers encoded classical images as quantum states and implemented a contrastive learning pipeline entirely on quantum hardware. The team used a seven-ion trapped-ion quantum computer with high-fidelity single- and two-qubit gates, enabling the execution of variational circuits with real-time feedback.
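To make the idea of encoding classical images as quantum states concrete, here is a minimal NumPy sketch of amplitude encoding, where pixel values become the normalized amplitudes of a few-qubit state. The encoding actually used on the trapped-ion hardware is not specified in this summary, so treat the scheme below as an illustrative assumption.

```python
import numpy as np

def amplitude_encode(image):
    """Map a 2D grayscale image to a normalized state vector on
    ceil(log2(n_pixels)) qubits (illustrative amplitude encoding)."""
    amps = image.astype(float).ravel()
    n_qubits = int(np.ceil(np.log2(len(amps))))
    padded = np.zeros(2 ** n_qubits)
    padded[: len(amps)] = amps
    norm = np.linalg.norm(padded)
    if norm == 0:
        raise ValueError("image must contain at least one nonzero pixel")
    return padded / norm, n_qubits

# Example: a 4x4 patch becomes a 4-qubit state.
patch = np.arange(16).reshape(4, 4)
state, n_qubits = amplitude_encode(patch)
print(n_qubits, np.isclose(np.linalg.norm(state), 1.0))
```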
Experiments reveal that this two-stage protocol of self-supervised pretraining followed by supervised fine-tuning significantly outperforms conventional supervised training, delivering higher accuracy and stability across varying numbers of training samples. The core breakthrough lies in measuring pairwise similarity via quantum state overlap, computed directly on the hardware. This lets the system learn the essential similarities and differences within the quantum representation of classical datasets, proving effective even with limited labeled data. The results establish a new paradigm for quantum machine learning, paving the way for more efficient and powerful algorithms and mirroring recent advances in classical machine learning, where pretraining improves generalization to unseen data.
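The way measured overlaps can feed a contrastive objective is sketched below: the overlap |⟨ψi|ψj⟩|² between two encoded states serves as the similarity score in an InfoNCE-style loss that pulls positive pairs together relative to the rest of the batch. The temperature, batch construction, and exact loss form are assumptions for illustration, not the protocol used in the experiment.

```python
import numpy as np

def contrastive_loss(states, positive_pairs, temperature=0.5):
    """Contrastive-style loss over a batch of pure states.
    Similarity of states i and j is the overlap proxy |<psi_i|psi_j>|^2;
    each (i, j) in positive_pairs should become more similar than
    state i is to the rest of the batch."""
    n = len(states)
    sim = np.abs(np.array([[np.vdot(a, b) for b in states] for a in states])) ** 2
    logits = sim / temperature
    loss = 0.0
    for i, j in positive_pairs:
        mask = np.arange(n) != i                    # exclude self-similarity
        log_denominator = np.log(np.sum(np.exp(logits[i][mask])))
        loss += -(logits[i, j] - log_denominator)   # InfoNCE-style term
    return loss / len(positive_pairs)

# Toy batch of three random single-qubit states; (0, 1) treated as a positive pair.
rng = np.random.default_rng(0)
batch = [v / np.linalg.norm(v)
         for v in rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))]
print(contrastive_loss(batch, [(0, 1)]))
```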
Quantum Pretraining Boosts Image Classification Accuracy
Scientists are pioneering a novel approach to quantum machine learning, achieving label-efficient representation learning through self-supervised pretraining. The team implemented contrastive learning on a programmable trapped-ion computer, encoding classical images as quantum states. By leveraging invariances within unlabeled data, the model learns robust feature representations that significantly improve image classification accuracy, particularly when labeled training data is limited. The core achievement lies in deriving similarity directly from measured quantum state overlaps and executing the entire process, both pretraining and classification, on quantum hardware.
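One standard way to obtain |⟨ψi|ψj⟩|² directly from measurements is the compute-uncompute (inversion) test: prepare state i, apply the inverse of the circuit that prepares state j, and record how often all qubits return to |0…0⟩. The shot-based simulation below illustrates the idea in NumPy; whether the experiment used this scheme or a swap-test variant is an assumption here.

```python
import numpy as np

def overlap_from_shots(prep_i, prep_j, n_shots=2000, rng=np.random.default_rng(1)):
    """Shot-based estimate of |<psi_j|psi_i>|^2 via compute-uncompute:
    apply prep_i then the inverse of prep_j to |0...0> and count how often
    a measurement returns the all-zeros outcome."""
    dim = prep_i.shape[0]
    zero = np.zeros(dim, dtype=complex)
    zero[0] = 1.0
    final_state = prep_j.conj().T @ (prep_i @ zero)        # U_j^dagger U_i |0>
    p_zero = min(1.0, float(np.abs(final_state[0]) ** 2))  # clamp tiny numerical overshoot
    return rng.binomial(n_shots, p_zero) / n_shots         # finite-shot estimate

# Toy example: two random 2-qubit "preparation" unitaries from QR decompositions.
rng = np.random.default_rng(2)
U1, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
U2, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
print(overlap_from_shots(U1, U2))
```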
The results establish a pathway toward practical quantum machine learning applications, showing that pretraining with unlabeled data enhances performance and reduces the need for extensive labeled datasets. Importantly, the learned invariances generalize beyond the specific images used during pretraining, indicating the model’s ability to extract meaningful features. While the current implementation focuses on relatively small images, the authors acknowledge limitations related to scaling to larger classical inputs and suggest future work will explore methods to address this challenge. This work represents a significant step toward harnessing the potential of quantum computers for complex machine learning tasks, offering a promising alternative to traditional, data-intensive approaches.
👉 More information
🗞 Quantum Machine Learning via Contrastive Training
🧠 arXiv: https://arxiv.org/abs/2511.13497
