Has Google Finally Proved Quantum Supremacy?

Google has claimed in a recent publication that it has achieved quantum supremacy: the point at which a quantum algorithm outperforms any classical algorithm on the same task. Observers of quantum computing milestones will recall that Google announced this once before, back in 2019 (“Quantum Supremacy Using a Programmable Superconducting Processor”). That announcement, however, was mired in controversy: researchers argued it was effectively a non-result, because a classical supercomputer could perform the task in a reasonable time frame. Google’s latest publication, from the team at Google Quantum AI, claims that the new quantum result is beyond the capabilities of even the most powerful classical supercomputers.

We estimate the computational cost against improved classical methods and demonstrate that our experiment is beyond the capabilities of existing classical supercomputers.

Google Quantum AI, in their recent paper “Phase transition in Random Circuit Sampling”

Quantum Advantage or Quantum Supremacy

The two terms mean fundamentally the same thing, and you may see them used interchangeably, although lately there has been a push towards “Quantum Advantage”. What is Quantum Advantage? It is the point at which quantum computers can solve problems faster or more efficiently than classical computers can. More specifically, it refers to a quantum computer performing a task or computation that, practically speaking, a classical computer cannot complete in a reasonable amount of time.

John Preskill, a prominent theoretical physicist at the California Institute of Technology, coined the term “quantum supremacy”. He defined it in a 2012 paper as the point where “quantum devices can perform tasks surpassing what can be done in the classical world.”

Google first made waves with its 2019 publication, “Quantum Supremacy Using a Programmable Superconducting Processor”. Here is an extract from that paper:

Measurements from repeated experiments sample the corresponding probability distribution, which we verify using classical simulations. While our processor takes about 200 seconds to sample one instance of the quantum circuit 1 million times, a state-of-the-art supercomputer would require approximately 10,000 years to perform the equivalent task. This dramatic speedup relative to all known classical algorithms provides an experimental realization of quantum supremacy on a computational task and heralds the advent of a much-anticipated computing paradigm.
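To make the sampling task itself concrete, here is a minimal sketch of random circuit sampling in plain NumPy. It is not Google’s code: the gate set (Haar-random single-qubit unitaries plus CZ), the brick-work layout and the tiny circuit size are simplified stand-ins, but the overall structure, random layers followed by sampling bitstrings from the output distribution, is the same idea.

```python
# Toy random circuit sampling: build a small random circuit, evolve the
# statevector exactly, then sample bitstrings from |amplitude|^2.
# Simplified stand-in for illustration, not Google's Sycamore gate set.
import numpy as np

rng = np.random.default_rng(0)
n = 4                      # tiny; the real experiments used 53-70 qubits
dim = 2 ** n
state = np.zeros(dim, dtype=complex)
state[0] = 1.0             # start in |00...0>

def apply_1q(state, gate, q):
    """Apply a 2x2 gate to qubit q of an n-qubit statevector."""
    psi = np.moveaxis(state.reshape([2] * n), q, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    return np.moveaxis(psi, 0, q).reshape(dim)

def apply_cz(state, q1, q2):
    """Controlled-Z between qubits q1 and q2 (phase -1 on |..1..1..>)."""
    idx = np.arange(dim)
    mask = ((idx >> (n - 1 - q1)) & 1) & ((idx >> (n - 1 - q2)) & 1)
    out = state.copy()
    out[mask == 1] *= -1
    return out

def random_su2(rng):
    """Haar-random single-qubit unitary via QR decomposition."""
    m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(m)
    return q * (np.diag(r) / np.abs(np.diag(r)))

depth = 8
for layer in range(depth):
    for q in range(n):                     # random single-qubit layer
        state = apply_1q(state, random_su2(rng), q)
    for q in range(layer % 2, n - 1, 2):   # brick-work entangling layer
        state = apply_cz(state, q, q + 1)

probs = np.abs(state) ** 2
probs /= probs.sum()                       # guard against float round-off
samples = rng.choice(dim, size=10, p=probs)
print([format(int(s), f"0{n}b") for s in samples])
```

At four qubits this is trivial to simulate; the point of the experiments is that the same recipe on 53 or 70 qubits produces a distribution that is believed to be intractable to sample from classically.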

Phase transition in Random Circuit Sampling: the 2023 result

The new paper references that earlier result, but the latest experiment uses more qubits than the 53 of 2019. The researchers have estimated the classical computing cost of the new computation: they suggest that conventional supercomputers would take over 47 years to reproduce the recent 70-qubit result. Of course, given the timescales involved, researchers could only estimate how long it would take rather than actually run it!
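One way to build intuition for why a handful of extra qubits matters so much: a brute-force classical statevector simulation must store 2^n complex amplitudes, so the memory requirement doubles with every added qubit. The sketch below is just this back-of-envelope arithmetic; the serious classical attacks use tensor-network contractions with very different trade-offs, so treat this as illustration, not the paper’s actual cost model.

```python
# Back-of-envelope memory cost of brute-force statevector simulation.
# An n-qubit state holds 2**n complex amplitudes; at 16 bytes per
# complex128 amplitude, the requirement doubles with each extra qubit.
for n in (53, 54, 70):
    bytes_needed = (2 ** n) * 16
    print(f"{n} qubits: 2**{n} amplitudes -> {bytes_needed / 2**50:,.0f} PiB")
# 53 qubits:            128 PiB  (already beyond any single machine)
# 54 qubits:            256 PiB  (one more qubit, double the memory)
# 70 qubits:     16,777,216 PiB  (~17 zettabytes)
```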

As Google found four years ago, such results can be notoriously tricky to interpret, despite the large number of people involved in Google’s work. Computational complexity is the field in which researchers aim to unpick results like these. The open question is whether an efficient classical algorithm exists that could perform the workload far faster than the estimated 47 years; for previous claims, such classical algorithms have indeed been found. Researchers such as Scott Aaronson, who have been vocal about past quantum claims, have so far been quiet on this news.

One criticism of both this and the earlier 2019 work is that, whilst the results may demonstrate quantum advantage, the workload itself has no practical function. Unlike parallelising an existing algorithm, such as rendering a scene for a movie, which can be sped up with a GPU, there is often no practical use for these sampling tasks. The recent paper does, however, posit “certified randomness generation” as one application.

The limited usefulness of these results so far need not be too discouraging, since there are firm fundamentals which show (at least theoretically) the utility of algorithms such as Deutsch-Jozsa, Shor and Grover; theorists have verified these algorithms and kicked their tyres thoroughly for decades. Shor’s algorithm could allow rapid factorisation of the large numbers used in cryptography, and Grover’s algorithm is thought to enable faster searching. But algorithms need an implementation to do something useful, which means converting classical data into a quantum representation. And there is the need to store results, yet QRAM doesn’t exist.
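As an illustration of how well-trodden those fundamentals are, here is a toy statevector simulation of Deutsch-Jozsa, the simplest of the algorithms mentioned above: promised that a function f is either constant or balanced, a single quantum query settles which, whereas a classical algorithm may need 2^(n-1) + 1 queries in the worst case. This is a minimal sketch in plain NumPy, using the phase-oracle form of the algorithm.

```python
# Toy Deutsch-Jozsa (phase-oracle form): H on all qubits, one oracle
# query applying the phase (-1)**f(x), H on all qubits again, then
# measure. P(|0...0>) is 1 if f is constant and 0 if f is balanced.
import numpy as np

n = 3
dim = 2 ** n

def hadamard_all(state):
    """Apply a Hadamard gate to every qubit of an n-qubit statevector."""
    h = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    psi = state.reshape([2] * n)
    for q in range(n):
        psi = np.moveaxis(
            np.tensordot(h, np.moveaxis(psi, q, 0), axes=([1], [0])), 0, q)
    return psi.reshape(dim)

def deutsch_jozsa(f):
    """Return True if f is judged constant, False if balanced."""
    state = np.zeros(dim)
    state[0] = 1.0                                  # |0...0>
    state = hadamard_all(state)                     # uniform superposition
    phases = np.array([(-1) ** f(x) for x in range(dim)])
    state = state * phases                          # one oracle query
    state = hadamard_all(state)
    return abs(state[0]) ** 2 > 0.5                 # P(|0...0>) = 1 or 0

print(deutsch_jozsa(lambda x: 0))       # constant function -> True
print(deutsch_jozsa(lambda x: x & 1))   # balanced function -> False
```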

The quantum computing field needs killer applications. The hope is that they will eventually come: researchers will get to devices of millions of qubits, error correction will be perfected, and massive functional quantum circuits will be run. But right now, results like Google’s, whilst impressive, should be neither over-hyped nor undersold. This one represents another “data point” in the case for quantum computing.