Quantum computing faces a growing challenge as the field diversifies: benchmarking methods are becoming increasingly fragmented and difficult to compare across hardware platforms. Neer Patel, Anish Giri, and Hrushikesh Pramod Patil, along with colleagues, address this issue by introducing a new, modular architecture for quantum benchmarking. This system separates the processes of creating benchmark problems, running circuits, and analysing results into independent components, allowing them to work together seamlessly. The team demonstrates the versatility of their approach by integrating with existing software tools, by creating new benchmarks such as one based on quantum reinforcement learning, and by validating the system across multiple computing environments. By establishing standardised interfaces, this work reduces fragmentation within the quantum computing ecosystem, paving the way for more meaningful performance comparisons and accelerating progress in the field.
Scientists have developed a modular architecture to address fragmentation within quantum computing benchmarking, successfully decoupling problem generation, circuit execution, and results analysis into independent, interoperable components. The system currently supports over 20 benchmark variants, ranging from simple algorithmic tests to complex Hamiltonian simulations, demonstrating broad applicability across diverse quantum algorithms. This architecture integrates with multiple circuit generation APIs, including Qiskit, CUDA-Q, and Cirq, enabling flexible workflows and customised benchmarking approaches. Researchers validated the architecture through successful integration with Sandia's pyGSTi for advanced circuit analysis and with CUDA-Q for multi-GPU high-performance computing simulations, confirming its compatibility with established quantum computing tools.
The system's extensibility is demonstrated by the implementation of dynamic circuit variants of existing benchmarks and a new quantum reinforcement learning benchmark, both readily available across multiple execution and analysis modes. This work represents a key enhancement to the QED-C Application-Oriented Performance Benchmarks for Quantum Computing suite, providing a unified framework for evaluating quantum systems. Experiments confirm the architecture's ability to integrate with backend quantum computing systems, including IBM Quantum and NERSC Perlmutter NVIDIA GPU simulators, showcasing its practical utility. The modular design allows users to combine components freely, whether running the complete integrated suite or wiring individual components into external tools. By reducing barriers between different benchmarking methodologies and enabling a standardised approach to performance assessment, this architecture facilitates more comprehensive quantum system evaluation.
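The decoupling described above can be illustrated with a short sketch. The class and method names below are hypothetical (the suite's actual APIs are not shown in this summary); the point is the structural idea: each stage sees only the previous stage's output, so any component can be swapped independently.

```python
from typing import Any, Protocol


class ProblemGenerator(Protocol):
    """Builds a benchmark problem, e.g. a circuit for a Hamiltonian simulation."""
    def generate(self, num_qubits: int) -> Any: ...


class CircuitExecutor(Protocol):
    """Runs a circuit on some backend: simulator, GPU cluster, or hardware."""
    def execute(self, circuit: Any, shots: int) -> dict[str, int]: ...


class ResultAnalyzer(Protocol):
    """Turns raw measurement counts into benchmark metrics."""
    def analyze(self, counts: dict[str, int]) -> dict[str, float]: ...


def run_benchmark(gen: ProblemGenerator, exe: CircuitExecutor,
                  ana: ResultAnalyzer, num_qubits: int,
                  shots: int = 1000) -> dict[str, float]:
    # Each stage communicates only through plain data, so a Qiskit-based
    # executor or a pyGSTi-style analyzer can be substituted without
    # touching the other components.
    circuit = gen.generate(num_qubits)
    counts = exe.execute(circuit, shots)
    return ana.analyze(counts)
```

Any object satisfying these structural interfaces plugs in, which is the "any combination of components" flexibility the article describes.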
Modular Benchmarking Architecture For Quantum Computers
This research presents a new, modular architecture designed to address fragmentation within the field of quantum computing benchmarking. By decoupling the processes of problem generation, circuit execution, and results analysis into independent components, the team has created a system that supports a wide range of benchmark tests, from basic algorithmic checks to complex Hamiltonian simulations. The architecture successfully integrates with existing circuit generation tools and has been validated both through collaboration with Sandia National Laboratories and through multi-GPU high-performance computing simulations. A key achievement lies in the formalisation of standardised interfaces that enable interoperability between previously incompatible benchmarking frameworks, while still allowing for optimisation flexibility.
The team demonstrated the extensibility of this system by implementing dynamic circuit variations and introducing a novel quantum reinforcement learning benchmark, both readily adaptable across different execution and analysis modes. Experiments utilising IBM quantum processors reveal performance characteristics of quantum reinforcement learning ansatzes, showing a linear decrease in fidelity with increasing qubit count, alongside approximately constant execution times. Further analysis demonstrates the impact of different optimisation algorithms on performance metrics during reinforcement learning tasks. This research contributes a valuable framework for more consistent and comparable quantum computing benchmarks, ultimately accelerating progress in the field.
Ansatz and Optimiser Performance Benchmarking
Scientists have developed a framework for benchmarking quantum computers specifically designed to assess their performance on reinforcement learning tasks. The system employs a modular design, allowing for easy addition of new environments, optimisers, and noise models. The framework utilises two primary methods for evaluation: characterising the performance of parameterised quantum circuits, known as ansatzes, and implementing a Quantum Deep Q-Network (Q-DQN) to solve the FrozenLake environment, a standard grid-world task. Performance is assessed by measuring the success rate, episode return, and resource usage of the Q-DQN.
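The "layers of quantum rotations" pattern can be sketched with a small, dependency-light statevector simulation. The specific ansatz used in the paper is not detailed in this summary; the layered RY-rotation circuit with linear CNOT entanglement below is a common hardware-efficient pattern, shown here purely for illustration.

```python
import numpy as np


def ry(theta: float) -> np.ndarray:
    """Single-qubit Y-rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])


def apply_single(state: np.ndarray, gate: np.ndarray, qubit: int, n: int) -> np.ndarray:
    """Apply a 1-qubit gate to the given qubit of an n-qubit statevector."""
    full = np.eye(1)
    for q in range(n):  # qubit 0 is the most significant bit
        full = np.kron(full, gate if q == qubit else np.eye(2))
    return full @ state


def apply_cnot(state: np.ndarray, control: int, target: int, n: int) -> np.ndarray:
    """Apply CNOT by permuting computational basis amplitudes."""
    new = np.zeros_like(state)
    for i in range(2 ** n):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control] == 1:
            bits[target] ^= 1
        j = sum(b << (n - 1 - q) for q, b in enumerate(bits))
        new[j] = state[i]
    return new


def layered_ansatz(params, n: int, layers: int) -> np.ndarray:
    """Layers of RY rotations followed by a linear chain of CNOTs —
    an illustrative hardware-efficient ansatz, not the paper's exact circuit."""
    state = np.zeros(2 ** n)
    state[0] = 1.0
    params = np.asarray(params, dtype=float).reshape(layers, n)
    for layer in range(layers):
        for q in range(n):
            state = apply_single(state, ry(params[layer, q]), q, n)
        for q in range(n - 1):
            state = apply_cnot(state, q, q + 1, n)
    return state
```

Characterising such an ansatz then amounts to preparing this state for many parameter settings and comparing measured distributions against the ideal output.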
The team evaluated the Q-DQN implementation using an ansatz consisting of multiple layers of quantum rotations, and trained the network using an epsilon-greedy strategy and a replay buffer to improve stability. They employed both SPSA and Adam optimisers to update the parameters of the quantum circuit, and incorporated realistic noise models to simulate the imperfections of real quantum hardware. The team meticulously tracked per-step metrics like circuit evaluations and environment interactions, alongside aggregate metrics such as total steps, episodes completed, and average return. They also measured the time required for quantum execution, environment interaction, and gradient evaluation. This comprehensive approach provides a detailed understanding of quantum computer performance on reinforcement learning tasks, paving the way for further optimisation and development of quantum algorithms.
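The training loop described above — epsilon-greedy action selection, a replay buffer, and per-step metric tracking — follows the standard DQN recipe. A minimal classical skeleton is sketched below; the quantum circuit evaluation and the SPSA/Adam parameter update are replaced with stubs, and all function names are illustrative rather than taken from the framework.

```python
import random
from collections import deque


def epsilon_greedy(q_values, epsilon, rng):
    """Explore with probability epsilon, otherwise take the greedy action."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])


def train(env_step, q_eval, episodes=10, buffer_size=100,
          batch_size=8, epsilon=1.0, eps_decay=0.95, seed=0):
    """Q-DQN-style loop: q_eval(state) stands in for evaluating the
    parameterised quantum circuit; env_step(state, action) returns
    (next_state, reward, done)."""
    rng = random.Random(seed)
    buffer = deque(maxlen=buffer_size)  # replay buffer for training stability
    metrics = {"circuit_evals": 0, "env_interactions": 0, "episodes": 0}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            q_values = q_eval(state)          # one quantum circuit evaluation
            metrics["circuit_evals"] += 1
            action = epsilon_greedy(q_values, epsilon, rng)
            next_state, reward, done = env_step(state, action)
            metrics["env_interactions"] += 1
            buffer.append((state, action, reward, next_state, done))
            if len(buffer) >= batch_size:
                batch = rng.sample(list(buffer), batch_size)
                _ = batch  # SPSA/Adam update of PQC parameters would go here
            state = next_state
        epsilon *= eps_decay  # anneal exploration between episodes
        metrics["episodes"] += 1
    return metrics
```

Tracking circuit evaluations separately from environment interactions, as the counters above do, is what lets the benchmark attribute cost to quantum execution versus classical environment stepping.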
👉 More information
🗞 Platform-Agnostic Modular Architecture for Quantum Benchmarking
🧠 ArXiv: https://arxiv.org/abs/2510.08469
