Quantum Circuits Boost Machine Learning with Fewer Parameters

A new framework, parameter-efficient quantum multi-task learning, tackles the challenge of efficiently learning multiple complex tasks simultaneously. Hevish Cowlessur and colleagues at the University of Melbourne utilise variational quantum circuits to sharply reduce the number of parameters needed for task-specific predictions. The approach replaces conventional classical prediction heads with a fully quantum alternative, offering linear scaling of parameters as the number of tasks increases, in contrast to the quadratic growth of standard classical models. Evaluations across diverse benchmarks, including natural language processing and medical imaging, demonstrate comparable or superior performance to existing methods, alongside a substantial reduction in model size and successful implementation on both simulated and real quantum hardware.

Linear scaling in quantum multitask learning overcomes classical quadratic parameter limitations

The researchers developed a quantum prediction head that reduces task-specific parameters by a factor of up to twelve compared with classical hard-parameter-sharing methods, a major advance given that quadratic parameter growth previously limited the scalability of multi-task learning. Multi-task learning (MTL) is a machine learning paradigm in which a single model learns to perform multiple related tasks concurrently, leveraging shared representations to improve generalisation and data efficiency. Traditional MTL approaches, particularly those employing hard parameter sharing, utilise a shared backbone network followed by task-specific prediction heads. However, the number of parameters within these task-specific heads grows quadratically with the number of tasks, creating a significant bottleneck for scalability. The linear scaling achieved here, using variational quantum circuits within a fully quantum prediction head, overcomes a fundamental barrier in classical multi-task learning, where parameter counts increase disproportionately with each new task. This is particularly crucial as datasets grow and computational resources become strained.
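The contrast in scaling can be made concrete with a small illustrative calculation. The two functions below are hypothetical stand-ins, not the paper's actual architectures: a classical head whose parameter count grows with pairwise task interactions (quadratic in the number of tasks) versus a quantum head that adds a fixed-size subcircuit per task (linear).

```python
def classical_head_params(num_tasks: int, width: int = 64) -> int:
    # Illustrative only: if the task heads must account for pairwise
    # task interactions, the parameter count grows quadratically in
    # the number of tasks (hypothetical model, not the paper's exact one).
    return num_tasks * num_tasks * width

def quantum_head_params(num_tasks: int, per_task: int = 12,
                        shared: int = 48) -> int:
    # Illustrative only: one shared encoding block plus a fixed-size
    # variational subcircuit per task gives linear growth.
    return shared + num_tasks * per_task

for t in (2, 4, 8, 16):
    print(t, classical_head_params(t), quantum_head_params(t))
```

With these assumed sizes, the classical count already exceeds the quantum count at two tasks, and the gap widens as tasks are added.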

By decoupling task-independent quantum encoding from lightweight, task-specific subcircuits within a single circuit, the framework enables localised adaptation without excessive parameter demands. The initial stage involves encoding input data into a quantum state, a process that remains task-agnostic and is shared across all tasks. Subsequently, this shared quantum representation is processed by a series of task-specific subcircuits, each designed to extract features relevant to a particular task. These subcircuits are constructed using variational quantum circuits (VQCs), programmable quantum processors capable of mapping data into complex, high-dimensional Hilbert spaces. The VQC architecture allows for efficient parameterisation and optimisation, enabling the model to learn task-specific nuances without incurring a quadratic increase in parameters. Evaluations across natural language processing, medical imaging, and multimodal data benchmarks revealed performance matching, and sometimes exceeding, existing classical approaches. The prediction head, the component responsible for interpreting shared information, required up to twelve times fewer task-specific parameters. This reduction not only improves computational efficiency but also lowers the risk of overfitting, particularly when training data are limited.
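The two-stage structure described above can be sketched with a tiny statevector simulation. This is a minimal illustration, not the paper's circuit: two qubits, an angle-style shared encoding with one entangling gate, and a single trainable rotation per qubit as each task's subcircuit. The function names and parameter shapes are assumptions made for the sketch.

```python
import numpy as np

RY = lambda t: np.array([[np.cos(t / 2), -np.sin(t / 2)],
                         [np.sin(t / 2),  np.cos(t / 2)]])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

def shared_encoding(x):
    """Task-agnostic block: angle-encode the two input features,
    then entangle. This stage is shared across every task."""
    state = np.zeros(4); state[0] = 1.0          # |00>
    state = np.kron(RY(x[0]), RY(x[1])) @ state  # data-dependent rotations
    return CNOT @ state                          # shared entangling gate

def task_subcircuit(state, theta):
    """Lightweight task-specific block: one trainable RY per qubit
    (a hypothetical minimal form; the paper's subcircuits are richer)."""
    return np.kron(RY(theta[0]), RY(theta[1])) @ state

def predict(x, theta):
    """Expectation of Z on qubit 0 serves as the task's output."""
    psi = task_subcircuit(shared_encoding(x), theta)
    probs = np.abs(psi) ** 2
    return probs[0] + probs[1] - probs[2] - probs[3]

x = np.array([0.3, 1.1])                                 # one input sample
thetas = [np.array([0.5, -0.2]), np.array([1.4, 0.9])]   # two tasks
outputs = [predict(x, th) for th in thetas]
```

Note that only `thetas` grows as tasks are added; the encoding stage stays fixed, which is the source of the linear parameter scaling.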

This parameter efficiency was further validated through successful implementation on both noisy simulators and actual quantum hardware, demonstrating feasibility on current devices. The experiments were conducted using both state-of-the-art quantum simulators and real quantum processing units (QPUs) provided by IBM Quantum. While current QPUs are limited in qubit count and coherence times, the successful execution of the algorithm on these devices demonstrates the potential for near-term quantum advantage in multi-task learning. Error mitigation techniques were crucial in suppressing the effects of noise on the QPU and ensuring reliable results. Despite the promising results on simulators and limited quantum hardware, the framework’s reliance on a ‘controlled and capacity-matched formulation’ introduces a key constraint. Carefully tailoring the shared quantum representation to each specific task combination is essential, as a misaligned formulation could negate the benefits of parameter efficiency and hinder performance. This ‘capacity-matched formulation’ refers to the careful selection of the dimensionality of the shared quantum representation, ensuring it can adequately capture the information required for all tasks without becoming overly complex or redundant.

While the linear scaling of parameters offers a clear advantage over classical methods, it is predicated on this careful design, raising questions about the framework’s robustness and ease of use across diverse, real-world multi-task scenarios. The process of determining the optimal shared quantum representation and task-specific subcircuits requires careful consideration and potentially significant computational resources for hyperparameter tuning. Further research is needed to develop automated methods for designing these formulations, reducing the reliance on expert knowledge and simplifying implementation. Even so, compared to the quadratic growth seen in traditional methods, linear parameter scaling offers a significant advantage as task numbers increase. Successful implementation on both simulated and real quantum hardware further validates the feasibility of this hybrid quantum-classical approach, despite current technological limitations. The hybrid approach leverages the strengths of both classical and quantum computing, utilising classical resources for data pre-processing and post-processing while offloading the computationally intensive task of parameter learning to the quantum processor.
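In such hybrid loops, a standard way for the classical optimiser to obtain gradients of circuit outputs is the parameter-shift rule, in which the circuit is evaluated at two shifted parameter values. The single-parameter sketch below assumes this common technique; the paper's actual training procedure may differ.

```python
import numpy as np

def expectation(theta: float) -> float:
    # <Z> after RY(theta) applied to |0>: analytically cos(theta),
    # computed from the statevector to mimic a circuit evaluation.
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return psi[0] ** 2 - psi[1] ** 2

def parameter_shift_grad(theta: float) -> float:
    # Exact gradient from two shifted circuit evaluations; this is
    # the rule widely used to train variational quantum circuits.
    s = np.pi / 2
    return (expectation(theta + s) - expectation(theta - s)) / 2

# Classical gradient-descent loop driving the "quantum" evaluations.
theta, lr = 0.1, 0.4
for _ in range(200):
    theta -= lr * parameter_shift_grad(theta)
# theta drifts toward pi, where <Z> = -1 is minimal
```

The classical side only ever sees scalar expectation values, which is what makes the division of labour between the two processors clean.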

A parameter-efficient quantum multi-task learning framework, scaling favourably against classical methods, has been demonstrated. This hybrid quantum-classical approach successfully ran on both simulated and real quantum hardware, paving the way for more complex algorithms. The model achieves comparable performance to existing methods with far fewer parameters by utilising fully quantum prediction heads, which replace conventional components to improve efficiency in multi-task learning, a technique allowing computers to learn several jobs concurrently. By employing variational quantum circuits (VQCs), programmable quantum processors mapping data into complex mathematical spaces, the parameter count scales linearly with the number of tasks, a substantial improvement over the quadratic growth seen in standard classical approaches. Future development will explore encoding strategies and expand task capabilities further. Investigating alternative quantum encoding schemes, such as amplitude encoding or angle encoding, could further enhance the performance and efficiency of the framework. Expanding the range of tasks to include more complex and diverse applications, such as reinforcement learning and generative modelling, will be crucial for demonstrating the full potential of this approach. The ultimate goal is a versatile and scalable quantum multi-task learning framework that can address a wide range of real-world problems.
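The trade-off between the two encoding schemes mentioned above shows up in how many qubits each needs. The sketch below contrasts them on classical statevectors; it is illustrative only, since real amplitude encoding also requires a nontrivial state-preparation circuit.

```python
import numpy as np

def angle_encode(x):
    """Angle encoding: each feature sets a rotation angle on its own
    qubit, so n features need n qubits (product state shown here)."""
    qubits = [np.array([np.cos(v / 2), np.sin(v / 2)]) for v in x]
    state = qubits[0]
    for q in qubits[1:]:
        state = np.kron(state, q)
    return state

def amplitude_encode(x):
    """Amplitude encoding: features become the amplitudes of a single
    normalised state, so 2^n values fit on n qubits, at the cost of
    deeper state-preparation circuits in practice."""
    x = np.asarray(x, dtype=float)
    return x / np.linalg.norm(x)

features = [0.4, 1.2, 0.7, 2.0]
a = angle_encode(features)      # 16 amplitudes across 4 qubits
b = amplitude_encode(features)  # 4 amplitudes across 2 qubits
```

Angle encoding keeps circuits shallow but uses more qubits; amplitude encoding is qubit-frugal but harder to prepare, which is why the choice of scheme is a natural axis for the future work described above.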

The researchers demonstrated a new parameter-efficient quantum multi-task learning framework that scales favourably compared to classical methods. This hybrid quantum-classical approach utilises variational quantum circuits to create compact representations, allowing a computer to learn multiple tasks simultaneously with fewer parameters. Specifically, the quantum prediction head exhibited linear parameter growth as the number of tasks increased, unlike the quadratic growth observed in standard classical heads. The model was successfully tested on both simulated and real quantum hardware, and future work will focus on exploring encoding strategies and expanding task capabilities.

👉 More information
🗞 Parameter-efficient Quantum Multi-task Learning
🧠 ArXiv: https://arxiv.org/abs/2604.13560

Muhammad Rohail T.
