A new qubit-mapping approach called QARMA-R has achieved a 97-100% reduction in intercore operations, minimizing a primary source of noise and decoherence in modular quantum systems. Researchers at Pukyong National University in South Korea developed QARMA-R, which uses attention-based deep reinforcement learning to optimize the physical layout and operation of quantum circuits. Experimental results show that QARMA-R reduces intercore communication by 86% on average, outperforming highly optimized configurations within the Qiskit framework. This advance, detailed in Physics Applied, enables larger quantum algorithms to run on limited hardware and contributes to the development of scalable quantum computing architectures.
QARMA-R Achieves 86% Average Intercore Communication Reduction
This performance improvement surpasses existing tools for managing quantum information flow between processing units. The innovation combines attention-based mechanisms with graph neural networks to learn optimal qubit allocation, routing, and reuse strategies; QARMA-R incorporates dynamic qubit reuse to further enhance efficiency. QARMA itself maintains improvements of 15% and 40% for larger circuits without reuse, while achieving a 97-100% reduction in intercore operations compared with traditional modular qubit mapping. These improvements are critical for scaling quantum computing architectures, as costly intercore operations and quantum state transfers currently limit the size and complexity of algorithms that can be executed.
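To make the metric concrete: an "intercore operation" arises whenever a two-qubit gate acts on qubits assigned to different cores. The sketch below (not QARMA-R's implementation; the function name and toy circuit are illustrative assumptions) shows how the quantity being reduced can be counted for a given mapping.

```python
# Illustrative sketch, NOT the QARMA-R implementation: given an assignment
# of logical qubits to cores, count how many two-qubit gates span two
# different cores -- the "intercore operations" that mapping methods aim
# to minimize.

def count_intercore_ops(two_qubit_gates, qubit_to_core):
    """two_qubit_gates: list of (q0, q1) pairs; qubit_to_core: dict qubit -> core."""
    return sum(
        1 for q0, q1 in two_qubit_gates
        if qubit_to_core[q0] != qubit_to_core[q1]
    )

# Toy 4-qubit ring circuit split across two cores (hypothetical example).
gates = [(0, 1), (1, 2), (2, 3), (0, 3)]
naive = {0: 0, 1: 1, 2: 0, 3: 1}    # alternating assignment: every gate crosses
better = {0: 0, 1: 0, 2: 1, 3: 1}   # groups interacting qubits: only 2 crossings

print(count_intercore_ops(gates, naive))   # 4
print(count_intercore_ops(gates, better))  # 2
```

Even this toy example shows why mapping matters: the same circuit incurs half the intercore traffic under a better assignment, and learned mappers search this assignment space far more aggressively than simple heuristics.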
Modular quantum computing, while promising scalability, currently faces limitations imposed by the physical constraints of interconnecting multiple quantum processing units. Existing methods for assigning qubits to physical locations, known as mapping, struggle with the substantial overhead of communication between these processing units, a major source of error and decoherence. This application of artificial intelligence optimizes the physical arrangement and operation of quantum circuits, rather than solely the algorithms themselves.
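For contrast with the learned approach, the following is a minimal sketch of the kind of classical baseline such methods improve upon: a hypothetical greedy heuristic (not QARMA-R's policy, and not a Qiskit pass) that fills fixed-capacity cores by placing each qubit where it already has the most interaction partners.

```python
# Hypothetical greedy baseline, assumed for illustration only: assign
# qubits to fixed-capacity cores one at a time, preferring the core that
# already holds the most of the qubit's interaction partners.
from collections import defaultdict

def greedy_map(num_qubits, gates, num_cores, capacity):
    # Build the interaction graph from the circuit's two-qubit gates.
    neighbors = defaultdict(set)
    for a, b in gates:
        neighbors[a].add(b)
        neighbors[b].add(a)
    mapping, load = {}, defaultdict(int)
    # Place highest-degree qubits first so dense clusters stay together.
    for q in sorted(range(num_qubits), key=lambda q: -len(neighbors[q])):
        best = max(
            (c for c in range(num_cores) if load[c] < capacity),
            key=lambda c: sum(1 for n in neighbors[q] if mapping.get(n) == c),
        )
        mapping[q] = best
        load[best] += 1
    return mapping

# Toy 4-qubit ring circuit, two cores of capacity 2.
mapping = greedy_map(4, [(0, 1), (1, 2), (2, 3), (0, 3)], num_cores=2, capacity=2)
print(mapping)  # {0: 0, 1: 0, 2: 1, 3: 1}
```

A greedy pass like this makes one irrevocable choice per qubit; a reinforcement-learning mapper can instead learn placement, routing, and reuse decisions jointly from feedback on the resulting intercore cost, which is where the reported gains come from.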
