British Quantum Company Riverlane in Fujitsu Quantum Simulator Challenge, Advancing Quantum Error Correction

Riverlane has been recognized in the Fujitsu Quantum Simulator Challenge for its work on simulating a quantum noise phenomenon known as leakage. Understanding leakage is crucial for quantum error correction, the set of techniques that correct errors in quantum calculations. Riverlane used Fujitsu’s quantum circuit simulator to simulate quantum error correction codes and also studied a leakage-reduction method called “wiggling” introduced by Google. The project, which ran over the summer of 2023, brought together a multidisciplinary team from Riverlane and the computational resources of Fujitsu. The Fujitsu Quantum Simulator Challenge itself ran from February to September 2023.

Riverlane’s Achievement in the Fujitsu Quantum Simulator Challenge

Riverlane, a quantum computing company, has been recognized as the second-place winner in the Fujitsu Quantum Simulator Challenge. Its entry focused on simulating a quantum noise phenomenon known as leakage. Understanding and addressing leakage is crucial for quantum error correction, a key step towards unlocking the full potential of quantum computing.

Quantum bits, or qubits, are the fundamental units of quantum computers. However, they are highly susceptible to noise, which can disrupt a calculation before it produces a useful result. Quantum error correction techniques are designed to catch and correct these errors before they compromise the calculation. The goal is to enable quantum computers to run approximately a trillion reliable quantum operations, a milestone known as the TeraQuop, which cannot be reached without quantum error correction.

The Role of Quantum Error Correction Codes and Simulators

The challenge required participants to use Fujitsu’s quantum circuit simulator, the fastest in the world, to simulate various quantum error correction codes. These codes combine many noisy physical qubits into fewer, more reliable logical qubits.
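
To make the idea concrete, here is a minimal sketch (plain Python and NumPy, not the codes or the Fujitsu simulator used in the challenge) of the simplest error correction code: a three-qubit repetition code that protects one logical bit against bit flips by majority vote.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
p = 0.05           # probability that each physical qubit suffers a bit flip
n = 3              # three noisy physical qubits encode one logical bit
shots = 100_000

# Encode logical 0 as three physical 0s, flip each qubit independently
# with probability p, then decode by majority vote.
flips = rng.random((shots, n)) < p
logical_errors = flips.sum(axis=1) > n // 2

print(f"physical error rate: {p}")
print(f"logical error rate : {logical_errors.mean():.4f}")   # roughly 3 * p**2
```

Even this toy code pushes the logical error rate well below the physical one; full quantum error correction codes follow the same principle while also protecting against phase errors.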

Classical simulators cannot match the computational power that quantum computers promise, but they are essential tools in their development: they allow small-scale quantum algorithms to be tested and quantum error correction protocols to be simulated.

Benchmarks and Experiments in Quantum Error Correction

Several benchmarks are used to test readiness for quantum error correction. Memory experiments, which measure how well logical observables are preserved over time, are a well-established benchmark: they essentially measure the lifetime of a logical qubit.
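
As an illustration, a memory experiment can be generated and sampled in a few lines with the open-source Stim stabilizer simulator (shown here purely for illustration; it is not the Fujitsu simulator used in the challenge). The sketch below builds a distance-5 repetition-code memory experiment with 25 rounds of noisy stabilizer measurements.

```python
import stim

# Distance-5 repetition-code memory experiment: 25 rounds of stabilizer
# measurements with depolarizing noise after every Clifford gate.
circuit = stim.Circuit.generated(
    "repetition_code:memory",
    distance=5,
    rounds=25,
    after_clifford_depolarization=0.01,
)

# Sample the detection events (parity checks that flag physical errors)
# and the final logical observable over many shots.
sampler = circuit.compile_detector_sampler()
detection_events, observable_flips = sampler.sample(
    shots=100_000, separate_observables=True
)
print("detector data shape:", detection_events.shape)
print("raw logical flip rate (no decoding):", observable_flips.mean())
```

Feeding the detection events to a decoder would then estimate how long the logical qubit survives. Note that stabilizer simulators such as Stim model Pauli noise on qubits and cannot natively represent leakage into a |2⟩ state, which is one reason the study described below relied on a fully quantum-mechanical simulation.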

Stability experiments, another benchmark, assess how well logical observables are preserved across space. Unlike memory experiments, whose performance improves as the spatial size of the experiment grows, stability experiments improve as their duration increases, which allows good performance with fewer qubits. For this project, Riverlane ran simulations of the stability experiment.

Riverlane’s Approach to Quantum Noise and Leakage

Riverlane’s simulation utilized a fully quantum-mechanical noise model, which included leakage. Leakage is a particularly harmful type of noise that removes qubits from the computational space: the state of a qubit is no longer confined to the space spanned by |0⟩ and |1⟩ and can instead leak into a |2⟩ state. This necessitated the simulation of qutrits (quantum trits), units of quantum information whose state can be |0⟩, |1⟩, |2⟩, or any superposition of these three quantum states.
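
As a small illustration (a NumPy sketch, not the noise model used in the project), a qutrit is simply a three-dimensional state vector, and leakage shows up as amplitude on the |2⟩ level.

```python
import numpy as np

# Basis states of a qutrit: |0> and |1> span the computational subspace,
# |2> is the leaked level.
ket0 = np.array([1, 0, 0], dtype=complex)
ket1 = np.array([0, 1, 0], dtype=complex)
ket2 = np.array([0, 0, 1], dtype=complex)

# A qubit-like superposition that has partially leaked into |2>.
state = 0.7 * ket0 + 0.6 * ket1 + 0.4 * ket2
state /= np.linalg.norm(state)

# Probability that a measurement finds the qutrit outside the
# computational {|0>, |1>} subspace, i.e. that it has leaked.
print(f"leakage probability: {abs(state[2]) ** 2:.3f}")

# A register of N qutrits needs 3**N amplitudes, compared with the 2**N
# needed for N qubits.
```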

The Riverlane team tested a variety of noise models, all inspired by superconducting qubits, and studied previously proposed leakage-reduction methods. They found that some leakage-reduction methods can actually be counterproductive under certain noise models where leakage moves efficiently between qubits.

The Impact of Leakage and Mitigation Strategies

Leakage can have a significant impact on a quantum computer. A leaked qubit can remain leaked for a long time, introducing numerous errors while it does. To combat this, dedicated mechanisms called leakage-reduction units (LRUs) are used to remove leakage at frequent intervals.

The Riverlane team studied a specific LRU called “wiggling,” introduced by Google, which involves frequently resetting every qubit in the quantum computer. At each round of error correction, half of the qubits are measured, and a reset then removes leakage from the system by returning each measured qubit to the |0⟩ state.
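
The reset step can be pictured with a toy model (a NumPy sketch under simplified assumptions, not Google’s actual protocol): whatever state a measured qutrit is found in, including the leaked |2⟩ level, it is subsequently prepared in |0⟩, so leakage cannot persist on that qubit.

```python
import numpy as np

def measure_then_reset(state, rng):
    """Toy model of the reset step in a leakage-reduction unit.

    The qutrit is measured in the {|0>, |1>, |2>} basis and then prepared
    in |0> regardless of the outcome, so any leakage is removed.
    """
    probs = np.abs(state) ** 2
    probs /= probs.sum()
    outcome = rng.choice(3, p=probs)
    return outcome, np.array([1, 0, 0], dtype=complex)

rng = np.random.default_rng(seed=1)
leaked = np.array([0, 0, 1], dtype=complex)   # a fully leaked qutrit
outcome, new_state = measure_then_reset(leaked, rng)
print(outcome)     # 2: the measurement finds the leaked level
print(new_state)   # [1, 0, 0]: the qutrit is back in the computational space
```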

The Challenge of Simulating Quantum Computers

Simulating quantum computers on conventional (“classical”) hardware is incredibly challenging because the memory needed to describe a quantum state of N qubits grows exponentially: a full state vector contains 2^N complex amplitudes. For example, a system with 30 qubits requires approximately 16 GB of random-access memory (RAM), which is typical for a modern laptop.

The Fujitsu simulator used in the challenge contains 512 Fujitsu A64FX processors, each with 32 GB of RAM. The full 512-processor system can simulate up to 39 qubits. For their experiments, the Riverlane team did not exceed 17 qutrits (each encoded in two qubits, so 34 qubits in total), but they repeated their simulations several hundred thousand times to gather better statistics.
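
As a rough check on these figures, the state vector of N qubits holds 2^N complex amplitudes at 16 bytes each (assuming double-precision complex numbers, a common choice).

```python
# State-vector memory for N qubits: 2**N amplitudes, 16 bytes each
# (assuming double-precision complex numbers).
for n_qubits, note in [(30, "a modern laptop"),
                       (34, "the 17-qutrit runs, two qubits per qutrit"),
                       (39, "the full 512-processor system")]:
    gib = 2 ** n_qubits * 16 / 2 ** 30
    print(f"{n_qubits} qubits: {gib:8,.0f} GiB  ({note})")

# 30 qubits:       16 GiB
# 34 qubits:      256 GiB
# 39 qubits:    8,192 GiB
```

At 39 qubits the state vector alone occupies about 8 TB, roughly half of the machine’s 512 x 32 GB of RAM, presumably leaving the remainder for working buffers and communication.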

The Fujitsu Quantum Simulator Challenge and Riverlane’s Teamwork

The Fujitsu Quantum Simulator Challenge ran from February to September 2023. During this global competition, Fujitsu invited members of industry and academia to test its 39-qubit quantum simulator on novel problems and applications. The winners were announced at Fujitsu Quantum Day on 25th January 2024 at De Oude Bibliotheek Academy in Delft, the Netherlands.

The project was a collaborative effort by Riverlane’s multidisciplinary team, drawing on Fujitsu’s computational resources. Team members contributed in various ways, from generating the circuits and investigating the physics of leakage to writing the code and performing the simulations. They also analyzed the data in real time under the pressure of the deadline, uncovering bugs and insights along the way.