Researchers Define Quantum Computing Benchmarks with a Novel Taxonomy and Systematic Literature Review

Quantum computing promises revolutionary advances, but measuring progress in this rapidly evolving field presents a significant challenge. Tobias Rohe, Federico Harjes Ruiloba, and Sabrina Egger, from the Institute for Computer Science at LMU Munich, together with colleagues, address this challenge with a comprehensive review of quantum computing benchmarks. Their work systematically analyses existing benchmarking approaches, revealing patterns and gaps in current methods. The team develops a novel taxonomy that categorises benchmarks by hardware, software, and application focus, offering a unified picture of the field and establishing standard terminology. This research provides a crucial foundation for more coherent benchmark development, fairer evaluation of quantum computers, and stronger collaboration between researchers and stakeholders.

Quantum computing (QC) continues to evolve rapidly in both hardware and software, yet measuring progress within this complex and diverse field remains a significant challenge. To effectively track this advancement, identify existing bottlenecks, and comprehensively evaluate community efforts, benchmarks play a crucial role. Researchers therefore conducted a systematic literature review, combining natural language processing-based clustering with expert analysis, to develop a novel taxonomy and definitions for QC benchmarks, providing a clearer framework for evaluating progress and guiding future development.
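The paper does not reproduce its full pipeline in this summary, but a minimal sketch of what NLP-based clustering of benchmark literature might look like is shown below, assuming sentence embeddings followed by UMAP dimensionality reduction and HDBSCAN clustering (tools the surveyed literature also references). The model name, parameters, and example abstracts are illustrative assumptions, not the authors' exact setup.

```python
# Illustrative NLP clustering pipeline for grouping benchmark papers by
# abstract similarity. Model choice, parameters, and abstracts below are
# assumptions for demonstration, not the authors' exact methodology.
from sentence_transformers import SentenceTransformer
import umap
import hdbscan

abstracts = [
    "Randomized benchmarking of single- and two-qubit gate fidelities.",
    "Cycle benchmarking for multi-qubit gate error characterisation.",
    "Quantum volume as a holistic metric for near-term hardware.",
    "Application-oriented benchmarks for variational quantum algorithms.",
    "Benchmarking QAOA against classical heuristics on MaxCut instances.",
    "Evaluating compiler passes by circuit depth after transpilation.",
]

# 1. Embed each abstract into a dense vector.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(abstracts)

# 2. Reduce dimensionality so density-based clustering behaves well.
reducer = umap.UMAP(n_neighbors=3, min_dist=0.0, n_components=2, random_state=42)
reduced = reducer.fit_transform(embeddings)

# 3. Cluster; HDBSCAN labels low-density points as noise (-1).
clusterer = hdbscan.HDBSCAN(min_cluster_size=2)
labels = clusterer.fit_predict(reduced)

for label, text in sorted(zip(labels, abstracts)):
    print(f"cluster {label}: {text}")
```

In a real review pipeline, the resulting clusters would then be inspected and refined by domain experts, as the authors describe.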

Benchmarking Quantum Algorithms for Optimisation and Simulation

Current research spans a comprehensive body of literature covering quantum computing, machine learning, and related fields. A significant portion focuses on algorithms designed to solve specific problems more efficiently than classical computers, including the HHL algorithm for solving systems of linear equations, QAOA for combinatorial optimisation, and VQE for determining molecular ground-state energies. Researchers also investigate quantum error correction and mitigation techniques to address the inherent noise in quantum hardware, alongside methods for benchmarking quantum logic operations and assessing hardware sensitivity. The field of quantum machine learning (QML) is actively explored, with studies focusing on quantum neural networks, quantum kernels, and generative models.
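To make the variational approach concrete, the sketch below implements a toy VQE loop with a NumPy statevector simulation, minimising the expectation value of a small two-qubit Hamiltonian. It is an illustrative example of the algorithm class the review covers, not a benchmark or implementation taken from the paper.

```python
# Toy variational quantum eigensolver (VQE) using a NumPy statevector
# simulation. The two-qubit Hamiltonian and ansatz are illustrative only.
import numpy as np
from scipy.optimize import minimize

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Small two-qubit Hamiltonian: H = Z⊗Z + 0.5 (X⊗I + I⊗X)
H = np.kron(Z, Z) + 0.5 * (np.kron(X, I2) + np.kron(I2, X))

def ry(theta):
    """Single-qubit Y rotation."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]], dtype=complex)

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def ansatz(params):
    """Two RY rotations followed by a CNOT, applied to |00>."""
    state = np.zeros(4, dtype=complex)
    state[0] = 1.0
    state = np.kron(ry(params[0]), ry(params[1])) @ state
    return CNOT @ state

def energy(params):
    """Expectation value <psi|H|psi> of the ansatz state."""
    psi = ansatz(params)
    return float(np.real(psi.conj() @ H @ psi))

result = minimize(energy, x0=[0.1, 0.1], method="COBYLA")
exact = np.min(np.linalg.eigvalsh(H))
print(f"VQE energy: {result.fun:.4f}, exact ground state: {exact:.4f}")
```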

Researchers are leveraging quantum computation to speed up support vector machines and developing methods for encoding data suitably for quantum algorithms. Alongside these quantum approaches, classical machine learning tools, such as the scikit-learn library, the dimensionality reduction method UMAP, and the clustering algorithm HDBSCAN, provide baselines for comparison. A strong emphasis exists on application-oriented benchmarking, focusing on metrics like accuracy, speedup, and scalability. Researchers aim to determine whether quantum algorithms can outperform classical algorithms on specific tasks, utilising circuit learning and debugging tools to improve performance. Emerging trends include frameworks like QUARK for benchmarking quantum generative learning, and the growing use of large language models and other machine learning methods to assist with quantum algorithm design, data analysis, and error mitigation.
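A minimal sketch of what an application-oriented benchmark harness could look like is given below: it records solution quality and runtime across growing problem sizes for two solvers. Both solvers here are classical placeholders (the heuristic stands in for a quantum method such as QAOA), and the MaxCut task, problem sizes, and metrics are assumptions chosen for illustration.

```python
# Sketch of an application-oriented benchmark harness that records runtime
# and solution quality across problem sizes. The "solvers" are placeholders
# standing in for a classical baseline and a quantum heuristic.
import time
import random

def classical_solver(problem):
    """Placeholder baseline: brute-force MaxCut on a small random graph."""
    n, edges = problem
    best = 0
    for assignment in range(2 ** n):
        cut = sum(1 for (u, v) in edges
                  if (assignment >> u) & 1 != (assignment >> v) & 1)
        best = max(best, cut)
    return best

def heuristic_solver(problem):
    """Placeholder for a quantum heuristic (e.g. QAOA); here random sampling."""
    n, edges = problem
    best = 0
    for _ in range(200):
        bits = [random.randint(0, 1) for _ in range(n)]
        cut = sum(1 for (u, v) in edges if bits[u] != bits[v])
        best = max(best, cut)
    return best

def benchmark(solver, problem):
    """Return (solution value, wall-clock runtime) for one solver run."""
    start = time.perf_counter()
    value = solver(problem)
    return value, time.perf_counter() - start

for n in (6, 8, 10):
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if random.random() < 0.5]
    problem = (n, edges)
    exact, t_exact = benchmark(classical_solver, problem)
    approx, t_approx = benchmark(heuristic_solver, problem)
    ratio = approx / exact if exact else 1.0
    print(f"n={n}: approximation ratio={ratio:.2f}, "
          f"baseline {t_exact:.3f}s vs heuristic {t_approx:.3f}s")
```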

Robust Quantum Benchmarks Taxonomy and Characteristics

Researchers have undertaken a comprehensive analysis of quantum computing benchmarks, developing a novel taxonomy to address the field’s rapidly evolving landscape and diverse stakeholder perspectives. This work systematically organises benchmarks into categories focused on hardware, software, and applications, creating a hierarchical classification system that clarifies the benchmarking landscape. The resulting taxonomy identifies six key characteristics of robust benchmarks: relevance, fairness, reproducibility, usability, scalability, and transparency. Relevance demands that benchmarks accurately measure intended performance aspects, while fairness ensures unbiased cross-system comparisons.

Reproducibility requires consistent results across repeated tests, and usability focuses on ease of adoption and cost-effectiveness. Scalability is crucial for accommodating the progression from today’s small quantum devices to future large-scale computers, and transparency ensures that metrics are understandable and verifiable. This research demonstrates the importance of carefully considering these characteristics when designing and evaluating quantum computing systems. The team highlights that benchmarks must not only measure performance but also facilitate meaningful comparisons and track progress over time. By structuring the field and providing a common language, this work establishes a foundation for coherent benchmark development, fairer evaluation, and stronger collaboration within the quantum computing community.
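As a rough illustration of how the three benchmark levels and six characteristics fit together, the sketch below encodes them in a simple data structure; the example benchmark names and ratings are hypothetical and not drawn from the paper.

```python
# Sketch of the review's three-level organisation (hardware, software,
# application) and six benchmark characteristics as a data structure.
# Example benchmarks and scores are illustrative assumptions.
from dataclasses import dataclass, field

CHARACTERISTICS = ("relevance", "fairness", "reproducibility",
                   "usability", "scalability", "transparency")

@dataclass
class Benchmark:
    name: str
    level: str                                   # "hardware", "software", or "application"
    scores: dict = field(default_factory=dict)   # characteristic -> informal 0-5 rating

    def missing_characteristics(self):
        """Characteristics not yet assessed for this benchmark."""
        return [c for c in CHARACTERISTICS if c not in self.scores]

taxonomy = [
    Benchmark("randomized benchmarking", "hardware",
              {"relevance": 5, "reproducibility": 5, "scalability": 3}),
    Benchmark("compiler pass comparison", "software",
              {"relevance": 4, "usability": 4}),
    Benchmark("application-level QAOA suite", "application",
              {"relevance": 5, "fairness": 3}),
]

for b in taxonomy:
    print(f"{b.level:>11} | {b.name}: still to assess -> {b.missing_characteristics()}")
```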

Holistic Quantum Benchmarking For Stack-Wide Evaluation

This research presents a structured classification of quantum computing benchmarks, organised by hardware, software, and application levels, and aligned with the needs of different stakeholders within the field. The analysis demonstrates that a comprehensive evaluation of quantum computers requires multiple, purpose-specific benchmarks, as no single method can fully assess performance, and that progress at one level must be considered in relation to others. By establishing a common language and framework, this work aims to improve communication and collaboration within the quantum computing community, encouraging a more holistic, ‘stack-wide’ approach to evaluation. The study acknowledges certain limitations, including a focus on gate-based quantum computing, potentially excluding other paradigms like quantum annealing. Future research should extend the taxonomy to encompass a wider range of quantum computing approaches, and continued efforts are needed to validate progress across the entire quantum stack, from low-level hardware metrics to high-level application outcomes, to ensure claims of advancement are rigorously scrutinised.
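A ‘stack-wide’ evaluation in this spirit might simply collect purpose-specific results from each level side by side rather than collapsing them into a single score, as in the brief sketch below; the metric names and values are illustrative assumptions, not results from the review.

```python
# Sketch of a "stack-wide" report collecting purpose-specific benchmark
# results per stack level, without reducing them to one aggregate number.
# Metric names and values are illustrative assumptions.
stack_results = {
    "hardware":    {"two_qubit_gate_fidelity": 0.995, "T1_us": 120.0},
    "software":    {"compiled_depth_reduction": 0.31, "transpile_time_s": 2.4},
    "application": {"maxcut_approx_ratio": 0.88, "runtime_vs_classical": 1.7},
}

def stack_report(results):
    """Print each level's metrics side by side; no single benchmark or
    score can fully assess a quantum computer on its own."""
    for level, metrics in results.items():
        line = ", ".join(f"{k}={v}" for k, v in metrics.items())
        print(f"{level:>11}: {line}")

stack_report(stack_results)
```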

👉 More information
🗞 Quantum Computer Benchmarking: An Explorative Systematic Literature Review
🧠 ArXiv: https://arxiv.org/abs/2509.03078

Quantum News

As the Official Quantum Dog (or hound), my role is to dig out the latest nuggets of quantum goodness. There is so much happening right now in the field of technology, whether AI or the march of robots. But Quantum occupies a special space. Quite literally a special space. A Hilbert space, in fact, haha! Here I try to provide some of the news that might be considered breaking news in the Quantum Computing space.
