Quantum Advantage Measurement: Study Reveals Limitations of Linear Cross-Entropy Benchmark

Quantum advantage, achieved when a quantum device performs a computation beyond the reach of any classical device, is a significant milestone in quantum technology. Current efforts to demonstrate this advantage focus on sampling problems, using a benchmark to measure how closely the sampled distribution of a quantum device matches the ideal target distribution. The linear cross-entropy benchmark (XEB) is one such measure. However, a recent study has highlighted limitations of the XEB, including its vulnerability to exploitation by classical algorithms. The study suggests that further research is needed to refine methods of measuring quantum advantage, underscoring the complexity of quantum computing.

What is Quantum Advantage and How is it Measured?

Quantum advantage refers to the computational power of a quantum device that surpasses any existing classical devices. This demonstration is significant as it not only marks a milestone in quantum technology but also challenges the extended Church-Turing thesis, which is central to computational complexity theory. A straightforward way to demonstrate quantum advantage would be to run a quantum algorithm such as Shor’s integer factoring for problems whose size is too large to be solved by any known algorithm running on classical computers. However, this would require a quantum device with a large number of near-perfect qubits, which is beyond the capabilities of existing technology.

State-of-the-art quantum devices consist of several dozen imperfect qubits. Even the exploration of a potential scaling advantage requires larger systems consisting of at least several hundred coherent qubits. Instead of implementing such quantum algorithms, most current efforts towards demonstrating quantum advantage have focused on sampling problems, which are well suited to near-term quantum devices. In these problems, one is asked to produce a sequence of random bitstrings drawn from a certain probability distribution.

A natural choice of a distribution that would be challenging for a classical computer to reproduce is one based on a highly entangled many-body wave function. It has been shown that for a wide class of quantum states, exact sampling by classical computers is intractable under plausible assumptions.
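The sampling task described above can be illustrated with a toy statevector simulation. The sketch below (illustrative only; the gate set, circuit size, and depth are assumptions, not those of the actual experiments) builds a small random circuit from Haar-random single-qubit gates and CZ entangling gates, then draws bitstrings from the resulting output distribution:

```python
import numpy as np

def apply_1q(psi, u, q, n):
    """Apply a single-qubit gate u to qubit q of an n-qubit statevector."""
    psi = psi.reshape([2] * n)
    psi = np.tensordot(u, psi, axes=([1], [q]))
    psi = np.moveaxis(psi, 0, q)
    return psi.reshape(-1)

def apply_cz(psi, q1, q2, n):
    """Apply a CZ gate: flip the sign of amplitudes where both qubits are 1."""
    psi = psi.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    psi[tuple(idx)] *= -1
    return psi.reshape(-1)

def random_circuit_state(n_qubits, depth, rng):
    """Statevector of a toy random circuit: layers of Haar-random
    single-qubit gates followed by CZ gates on neighboring pairs."""
    psi = np.zeros(2 ** n_qubits, dtype=complex)
    psi[0] = 1.0
    for layer in range(depth):
        for q in range(n_qubits):
            # Haar-random 2x2 unitary via QR decomposition with phase fix
            a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
            u, r = np.linalg.qr(a)
            d = np.diag(r)
            u = u * (d / np.abs(d))
            psi = apply_1q(psi, u, q, n_qubits)
        for q in range(layer % 2, n_qubits - 1, 2):
            psi = apply_cz(psi, q, q + 1, n_qubits)
    return psi

rng = np.random.default_rng(0)
n = 6
psi = random_circuit_state(n, depth=8, rng=rng)
probs = np.abs(psi) ** 2
probs /= probs.sum()  # guard against tiny floating-point drift

# The sampling task: draw random bitstrings from this distribution.
samples = rng.choice(2 ** n, size=1000, p=probs)
print([format(s, f"0{n}b") for s in samples[:5]])
```

Classically, this brute-force approach costs memory and time exponential in the qubit count, which is precisely why sampling from large entangled circuits is believed intractable.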

How is the Linear Cross-Entropy Benchmark Used?

To demonstrate quantum advantage using an actual sampling experiment, one needs to introduce a benchmark that measures how close the sampled distribution of a quantum device is to the ideal target distribution. The idea is to show that the samples from the quantum device achieve high values, indicating good correlation with the ideal distribution, while presenting evidence that there does not exist an efficient classical algorithm that can produce samples achieving comparable values.

If the difference between the classical and quantum resources needed to achieve a certain value of the benchmark scales exponentially with the system size, this demonstrates that quantum devices have an exponential computational advantage even in the regime where the gates are too noisy to allow for quantum error correction. A prominent example of such a benchmark is the linear cross-entropy benchmark (XEB), which is computed from the average ideal probability of the sampled bitstrings, rescaled so that uniform random guessing scores zero. A non-vanishing value of the XEB is taken to mean that the sampled distribution is correlated with the ideal one.
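The standard linear XEB used in random-circuit sampling experiments is F_XEB = 2^n * mean(p(x_i)) - 1, where p(x_i) is the ideal probability of each sampled bitstring x_i. A minimal sketch of the estimator, using a toy Porter-Thomas-like (exponentially distributed) set of probabilities as a stand-in for a deep random circuit's output distribution (an assumption for illustration, not data from the experiments):

```python
import numpy as np

def linear_xeb(sample_probs, n_qubits):
    """Linear cross-entropy benchmark: F_XEB = 2^n * mean(p(x_i)) - 1,
    where p(x_i) is the *ideal* probability of each sampled bitstring."""
    return (2 ** n_qubits) * np.mean(sample_probs) - 1.0

# Toy ideal distribution: exponentially distributed probabilities,
# mimicking the Porter-Thomas statistics of deep random circuits.
rng = np.random.default_rng(1)
n = 10
dim = 2 ** n
p = rng.exponential(size=dim)
p /= p.sum()

# A faithful sampler concentrates on high-p bitstrings,
# so its XEB lands near 1 for Porter-Thomas statistics.
ideal_samples = rng.choice(dim, size=50_000, p=p)
# A uniform sampler is uncorrelated with p, so its XEB lands near 0.
uniform_samples = rng.integers(dim, size=50_000)

print(linear_xeb(p[ideal_samples], n))
print(linear_xeb(p[uniform_samples], n))
```

Note that the benchmark only requires the ideal probabilities of the bitstrings that were actually sampled, which is what makes it computable for experiments of moderate size.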

What are the Limitations of the Linear Cross-Entropy Benchmark?

The study critically examines the notion of using the linear cross-entropy benchmark (XEB) to certify quantum advantage. First, it considers a benign setting where an honest implementation of a noisy quantum circuit is assumed and characterizes the conditions under which the XEB approximates the fidelity of quantum dynamics. Second, it assumes an adversarial setting where all possible classical algorithms are considered for comparisons and shows that achieving relatively high XEB values does not imply faithful simulation of quantum dynamics.

Specifically, the study presents an efficient classical algorithm that achieves high XEB values, namely 51.2% of those obtained in state-of-the-art experiments, within just a few seconds on a single GPU machine. This is made possible by identifying and exploiting several vulnerabilities of the XEB, which allow the researchers to achieve high XEB values without simulating the full quantum circuit. Remarkably, the algorithm features better scaling with the system size than a noisy quantum device for commonly studied random circuit ensembles in various architectures.
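The general point, that a high XEB score does not certify faithful sampling, can be seen with a deliberately degenerate example. The sketch below is not the paper's spoofing algorithm (which exploits circuit structure to avoid full simulation); it simply shows that a sampler which always outputs one heavy bitstring scores far above 1 on the XEB while sampling from entirely the wrong distribution. Here the heavy bitstring is found by peeking at the ideal distribution, which a real spoofer cannot do; classical heuristics for finding moderately heavy strings cheaply are what make this vulnerability practically relevant:

```python
import numpy as np

def linear_xeb(sample_probs, n_qubits):
    """F_XEB = 2^n * mean(p(x_i)) - 1, with p(x_i) the ideal probabilities."""
    return (2 ** n_qubits) * np.mean(sample_probs) - 1.0

# Toy Porter-Thomas-like ideal distribution (illustration only).
rng = np.random.default_rng(1)
n = 10
dim = 2 ** n
p = rng.exponential(size=dim)
p /= p.sum()

# Degenerate "sampler": always emit the single heaviest bitstring.
# (Located here by inspecting p directly, purely for illustration.)
heavy = int(np.argmax(p))
spoof_samples = np.full(10_000, heavy)

# The score is 2^n * p_max - 1, which greatly exceeds 1 even though the
# emitted distribution (a point mass) is nothing like the ideal one.
print(linear_xeb(p[spoof_samples], n))
```

This is why the study argues that XEB values must be interpreted together with independently verified assumptions about the sampler, rather than read directly as a fidelity.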

How Can the Limitations of the Linear Cross-Entropy Benchmark be Overcome?

The study quantitatively explains the success of the classical algorithm and the limitations of the XEB by using a theoretical framework in which the dynamics of the average XEB and fidelity are mapped to classical statistical mechanics models. Using this framework, the researchers illustrate the relation between the XEB and the fidelity for quantum circuits in various architectures with different choices of gate sets and in the presence of noise.

The results demonstrate that XEB’s utility as a proxy for fidelity hinges on several conditions, which should be independently checked in the benign setting but cannot be assumed in the general adversarial setting. Therefore, the XEB on its own has limited utility as a benchmark for quantum advantage. The study briefly discusses potential ways to overcome these limitations, suggesting that further research and development are needed to refine the methods of measuring quantum advantage.

In conclusion, while the linear cross-entropy benchmark (XEB) has been a useful tool in the quest to demonstrate quantum advantage, this study highlights its limitations and the need for more robust and reliable measures. The findings underscore the complexity of quantum computing and the challenges that lie ahead in fully harnessing its potential.

Publication details: “Limitations of Linear Cross-Entropy as a Measure for Quantum Advantage”
Publication Date: 2024-02-29
Authors: Xun Gao, Marcin Kalinowski, Chi-Ning Chou, Mikhail D. Lukin, et al.
Source: PRX Quantum 5, 010334
DOI: https://doi.org/10.1103/PRXQuantum.5.010334

Quantum News


As the Official Quantum Dog (or hound), my role is to dig out the latest nuggets of quantum goodness. There is so much happening right now in the field of technology, whether AI or the march of the robots. But quantum occupies a special space. Quite literally a special space: a Hilbert space, in fact! Here I try to provide some of the news that might be considered breaking in the quantum computing space.
