Current quantum computing systems face limitations in qubit numbers, coherence, and error rates, which hinder the execution of complex circuits. Researchers developed a simulation tool to optimise workload distribution across multiple quantum processing units, or QPUs, connected by classical channels. Analysis of four scheduling techniques, including reinforcement learning, reveals trade-offs between runtime, fidelity, and communication costs, demonstrating potential throughput improvements in distributed infrastructures.
The escalating complexity of quantum algorithms increasingly challenges the limitations of current quantum hardware, specifically qubit availability, coherence times and error rates. Effective management of these constraints requires innovative approaches to workload distribution and resource allocation. Researchers at Kent State University and industry leaders from Cisco and Meta address this need by investigating adaptive job scheduling techniques for networked quantum processing units (QPUs). Waylon Luo, Jiapeng Zhao, Tong Zhan and Qiang Guan detail their findings in a study titled ‘Adaptive Job Scheduling in Quantum Clouds Using Reinforcement Learning’, where they present a simulation environment for evaluating scheduling strategies designed to optimise runtime, maintain fidelity and minimise communication overhead in distributed quantum systems. Their analysis compares four distinct approaches, including one leveraging the principles of reinforcement learning, to determine the most effective methods for parallelising and executing complex quantum circuits across multiple processors.
Quantum computing currently faces limitations imposed by small qubit counts, short coherence times, and susceptibility to errors, necessitating innovative approaches to expand computational scale beyond single-processor capabilities. Distributed quantum computing emerges as a promising pathway, with researchers focusing on strategies that decompose complex quantum circuits and execute them concurrently across networked quantum processing units (QPUs). A novel simulation environment now models a quantum cloud computing platform, enabling exploration of parallelised, noise-aware scheduling techniques designed to optimise performance and mitigate hardware constraints.
The simulation accurately models circuit decomposition, dividing large quantum algorithms into smaller tasks suitable for execution on multiple QPUs connected via real-time classical communication channels. This addresses a critical challenge in near-term quantum computation, where algorithms frequently exceed the capacity of available hardware. By simulating distributed execution, the platform facilitates experimentation with various scheduling strategies and provides insights into their performance under diverse conditions. Crucially, the simulation models communication overhead between QPUs, including latency and bandwidth limitations, essential for evaluating distributed algorithms where communication represents a potential bottleneck.
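The trade-off described above, executing partitions locally while paying a classical communication cost for each circuit cut, can be illustrated with a toy cost model. This sketch is not taken from the paper; the class names, parameter values, and the linear cost formula are all simplifying assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class QPU:
    qubits: int          # qubit capacity of this processor
    gate_time_us: float  # average two-qubit gate time (microseconds)

def estimated_runtime_us(partition_gates, cuts, qpu, latency_us=10.0):
    """Toy cost model: time to run the local gates of one partition,
    plus a fixed classical-communication latency per circuit cut.
    In a real system communication may dominate, which is why the
    scheduler must weigh cuts against parallelism."""
    compute = partition_gates * qpu.gate_time_us
    communication = cuts * latency_us
    return compute + communication

# A partition with 200 gates and 4 cuts on a QPU with 0.5 us gates:
qpu = QPU(qubits=20, gate_time_us=0.5)
t = estimated_runtime_us(200, cuts=4, qpu=qpu)  # 200*0.5 + 4*10 = 140.0
```

Even this crude model exposes the central tension: splitting a circuit across more QPUs shrinks the per-partition compute term but adds communication terms, so a scheduler cannot simply maximise parallelism.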
The simulation also models decoherence, the gradual loss of quantum information to the environment, capturing its impact on qubit lifetimes and gate fidelity and allowing researchers to assess how noise degrades quantum computations. This enables evaluation of error mitigation techniques, which aim to reduce the impact of errors without full quantum error correction. The platform supports multiple quantum circuit models, including gate-based computation, where algorithms are constructed from sequences of quantum gates, and measurement-based quantum computation, which relies on entangled states and single-qubit measurements. It also supports several quantum error correction schemes, including surface codes and other topological codes, allowing researchers to assess their effectiveness and associated computational overhead.
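A common first-order way to model this kind of decoherence, and one way a noise-aware scheduler might score candidate QPU assignments, is exponential decay of fidelity with circuit duration relative to the coherence time. The formula and function below are a standard textbook approximation, not the paper's actual noise model:

```python
import math

def circuit_fidelity(depth, gate_time_us, t2_us, n_qubits):
    """Rough dephasing-limited fidelity estimate: each qubit idles for
    the full circuit duration and its coherence decays as exp(-t / T2);
    the product over qubits gives a crude whole-circuit fidelity."""
    duration = depth * gate_time_us
    per_qubit = math.exp(-duration / t2_us)
    return per_qubit ** n_qubits

# A depth-100 circuit with 0.5 us gates on qubits with T2 = 100 us:
f = circuit_fidelity(depth=100, gate_time_us=0.5, t2_us=100.0, n_qubits=2)
```

Under a model like this, a scheduler can prefer deeper circuits on QPUs with longer coherence times, trading queue wait against expected fidelity.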
Researchers actively investigate the use of reinforcement learning, a type of machine learning in which an agent learns to make decisions in an environment so as to maximise a reward, to optimise scheduling decisions in real time, adapting to changing system conditions and workload characteristics. They plan to explore different reinforcement learning algorithms and reward functions to identify the most effective approach, and to investigate transfer learning, where knowledge gained from solving one problem is applied to a different but related problem, to accelerate agent training. The ultimate goal is a self-optimising scheduling system that automatically adapts to changing conditions and maximises performance.
Researchers plan to extend the simulation to cover more complex quantum error correction schemes, such as concatenated codes, which combine multiple error correction codes to improve performance, and subsystem codes, which offer increased flexibility and efficiency. They also intend to incorporate richer noise models, including correlated noise, where errors on different qubits are statistically related, and non-Markovian noise, where the probability of an error depends on the entire history of the system. Machine learning will be used to predict and mitigate the effects of such noise, and future work will focus on validating the simulation results with experiments on real quantum hardware.
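To make concrete what "correlated noise" means in contrast to independent errors, the sampler below biases each qubit's error probability upward when its neighbour has just flipped, so errors cluster. This is a deliberately simple toy, not a model from the paper; the parameter names and the nearest-neighbour correlation rule are assumptions:

```python
import random

def sample_correlated_errors(n_qubits, p=0.01, boost=5.0, seed=None):
    """Toy correlated-noise sampler: a base error rate p per qubit,
    multiplied by `boost` when the previous (neighbouring) qubit has
    already flipped, so errors cluster rather than occur independently.
    Independent noise is recovered by setting boost=1.0."""
    rng = random.Random(seed)
    errors = []
    prev = False
    for _ in range(n_qubits):
        rate = min(1.0, p * boost) if prev else p
        flipped = rng.random() < rate
        errors.append(flipped)
        prev = flipped
    return errors
```

Clustered errors like these are precisely what makes correlated noise harder for standard error correction codes, which are typically analysed under the assumption of independent faults.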
This research underscores the importance of holistic system design in quantum computing, requiring not only advancements in qubit technology but also sophisticated software tools and algorithms that effectively manage and coordinate distributed resources. This work contributes to the growing body of knowledge focused on building scalable and reliable quantum computing systems.
The simulation provides a valuable tool for designing and evaluating future quantum computing architectures. This platform will enable researchers to explore a wide range of design options and identify the most promising approaches for building scalable and fault-tolerant quantum computers.
👉 More information
🗞 Adaptive Job Scheduling in Quantum Clouds Using Reinforcement Learning
🧠 DOI: https://doi.org/10.48550/arXiv.2506.10889
