A team of researchers from various institutions has developed a framework for using quantum machine learning (QML) to enhance dynamical simulations on near-term quantum hardware. The team used generalization bounds to analyze the training data requirements of an algorithm within this framework; the algorithm is resource-efficient in both its qubit and its data requirements. The researchers also developed a QML-inspired algorithm for dynamical simulation called the resource-efficient fast-forwarding (REFF) algorithm. This algorithm uses training data to learn a circuit that allows for fast-forwarding, so that long-time simulations can be performed with a fixed-depth circuit.
What is the Potential of Quantum Machine Learning (QML) in Dynamical Simulations?
Quantum Machine Learning (QML) and dynamical simulations have been independently recognized as potential applications for quantum advantage. However, the possibility of using QML to enhance dynamical simulations has not been thoroughly investigated. A team of researchers, including Joe Gibbs, Zoë Holmes, Matthias C. Caro, Nicholas Ezzell, Hsin-Yuan Huang, Lukasz Cincio, Andrew T. Sornborger, and Patrick J. Coles, from institutions including the University of Surrey, AWE Aldermaston, Los Alamos National Laboratory, the École Polytechnique Fédérale de Lausanne, the Technical University of Munich, the Munich Center for Quantum Science and Technology, Freie Universität Berlin, Caltech, and the University of Southern California, has developed a framework for using QML methods to simulate quantum dynamics on near-term quantum hardware.
The researchers used generalization bounds, which bound the error a machine learning model makes on unseen data, to rigorously analyze the training data requirements of an algorithm within this framework. Their algorithm is resource-efficient in both its qubit and its data requirements. Preliminary numerics for the XY model exhibit efficient scaling with problem size, and the researchers were able to simulate 20 times longer than Trotterization on the IBMQ Bogota device.
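For context, the XY model referred to here is a standard benchmark spin chain. A common form of its Hamiltonian (the exact couplings, fields, and boundary conditions used in the paper may differ) is

$$ H_{XY} = \sum_{i=1}^{n-1} \left( X_i X_{i+1} + Y_i Y_{i+1} \right), $$

where $X_i$ and $Y_i$ are Pauli operators acting on qubit $i$ of an $n$-qubit chain.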
How Can QML Enhance Dynamical Simulations?
The exponential speedup of dynamical quantum simulation provided the original motivation for quantum computers. In the long term, large-scale quantum simulations are expected to transform fields such as materials science, chemistry, and high-energy physics. In the near term, because efficient classical methods for dynamical simulation are lacking (in contrast to those for computing static quantum properties such as electronic structure), dynamical simulation is a plausible candidate to be one of the first applications to see quantum advantage.
Achieving near-term quantum advantage for dynamics will require long-time simulations on noisy intermediate-scale quantum (NISQ) hardware. Standard methods such as Trotterization require circuit depths that grow in proportion to the simulation time, eventually exceeding the decoherence time of the NISQ device. Fast-forwarding methods for long-time simulations on NISQ devices have recently been introduced, but they are limited by various inefficiencies, such as their qubit and data requirements. The researchers addressed these inefficiencies, potentially opening the door to near-term quantum advantage.
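To make the depth scaling concrete, the following sketch (a purely illustrative comparison, not the authors' code; the time step, layers per Trotter step, and the fixed fast-forwarded depth are placeholder assumptions) contrasts how the depth of a first-order Trotter circuit grows with simulation time against the constant depth of a learned fast-forwarding circuit.

```python
import numpy as np

# First-order Trotterization: each step applies a fixed number of gate
# layers, so the total depth grows linearly with the simulated time t.
def trotter_depth(t, dt=0.1, layers_per_step=2):
    """Two-qubit-layer depth of a first-order Trotter circuit for time t."""
    n_steps = int(np.ceil(t / dt))
    return n_steps * layers_per_step

# A fast-forwarded simulation reuses one learned, fixed-depth circuit of
# the form W D(t) W^dagger, so its depth is independent of t.
# (30 is an arbitrary placeholder for the depth of the learned circuit.)
FAST_FORWARD_DEPTH = 30

for t in [1, 5, 10, 50, 100]:
    print(f"t = {t:5.1f}: Trotter depth = {trotter_depth(t):5d}, "
          f"fast-forwarded depth = {FAST_FORWARD_DEPTH}")
```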
What is the Role of QML in Quantum Advantage?
Quantum Machine Learning (QML) has emerged as another potential application for quantum advantage. At its core, QML involves using classical or quantum data to train a parameterized quantum circuit. A number of promising training paradigms are being pursued, including variational quantum algorithms that use training data, quantum generative adversarial networks, and quantum kernel methods. The researchers sought to combine the potential of QML and dynamical simulation by leveraging recent advances in QML to reduce the resource requirements of dynamical simulation.
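To give a flavor of what training a parameterized quantum circuit means in practice, here is a minimal single-qubit sketch (purely illustrative, using plain NumPy linear algebra rather than a quantum device; the hidden target angle, learning rate, and iteration count are arbitrary choices):

```python
import numpy as np

# Minimal example of training a parameterized quantum "circuit": a single
# Y rotation V(theta) is tuned so that V(theta)|0> matches a target state
# produced by an unknown rotation.
def ry(theta):
    """Single-qubit rotation about the Y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

target_theta = 1.234                      # hidden "data-generating" angle
ket0 = np.array([1.0, 0.0])
target_state = ry(target_theta) @ ket0    # training data: a single state

def cost(theta):
    """Infidelity between V(theta)|0> and the target state."""
    overlap = np.vdot(target_state, ry(theta) @ ket0)
    return 1.0 - np.abs(overlap) ** 2

# Simple gradient descent using finite-difference gradients.
theta, lr, eps = 0.0, 0.5, 1e-6
for _ in range(200):
    grad = (cost(theta + eps) - cost(theta - eps)) / (2 * eps)
    theta -= lr * grad

print(f"learned theta = {theta:.4f}, target = {target_theta}, "
      f"final infidelity = {cost(theta):.2e}")
```

Real QML workloads replace the single rotation with a deep parameterized circuit evaluated on quantum hardware, but the same train-a-circuit-to-minimize-a-cost structure carries over.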
To assess the scalability of QML methods as well as their applicability to real-world problems, it is critical to understand their training data requirements, quantified by so-called generalization bounds. These provide bounds on the error a machine learning model makes on unseen data as a function of the amount of data the model is trained on and of the training performance. In this paper, the researchers assessed the training data requirements of QML approaches to dynamical simulation.
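For intuition, generalization bounds of this kind typically take a form along the following lines (a schematic statement modeled on earlier QML generalization results, not the precise inequality proven in the paper):

$$ C_{\text{true}}(\boldsymbol{\alpha}) \;\lesssim\; \hat{C}_N(\boldsymbol{\alpha}) \;+\; \mathcal{O}\!\left(\sqrt{\frac{T \log T}{N}}\right), $$

where $C_{\text{true}}(\boldsymbol{\alpha})$ is the cost on unseen data, $\hat{C}_N(\boldsymbol{\alpha})$ is the training cost evaluated on $N$ training states, and $T$ is the number of trainable gates in the circuit. The practical message is that the amount of training data needed grows only mildly with the size of the parameterized circuit.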
What is the Resource-Efficient Fast-Forwarding Algorithm (REFF)?
The researchers’ analysis provides the groundwork for a QML-inspired algorithm for dynamical simulation that they call the resource-efficient fast-forwarding (REFF) algorithm. This algorithm uses training data to learn a circuit that allows for fast-forwarding, so that long-time simulations can be performed with a fixed-depth circuit. The REFF algorithm is efficient in the amount of training data it requires. It is also qubit-efficient in the sense that simulating an n-qubit system requires only n qubits, in contrast to earlier work, which required 2n qubits.
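The core fast-forwarding structure can be illustrated with a small linear-algebra toy model (a sketch only: it uses exact diagonalization of a random Hamiltonian as a stand-in for the learned circuit, whereas REFF learns the decomposition variationally from training data on a quantum device):

```python
import numpy as np
from scipy.linalg import expm

# Toy fast-forwarding: if the short-time evolution U(dt) = exp(-i H dt) is
# well approximated by W D(dt) W^dagger with D diagonal, then long-time
# evolution only requires changing the phases in D -- the circuit depth of
# W D(t) W^dagger does not grow with the simulated time t.
rng = np.random.default_rng(0)
dim = 4                                   # toy Hilbert-space dimension
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2                  # random Hermitian "Hamiltonian"
dt = 0.1

U_dt = expm(-1j * H * dt)                 # short-time step (the training target)

# Here W and the spectrum are obtained by exact diagonalization; REFF
# instead learns the corresponding circuits from data.
evals, W = np.linalg.eigh(H)
def D(t):
    return np.diag(np.exp(-1j * evals * t))

K = 100                                   # number of fast-forwarded steps
U_long = np.linalg.matrix_power(U_dt, K)  # depth grows with K
V_long = W @ D(K * dt) @ W.conj().T       # same fixed structure for any K

print("long-time approximation error:", np.linalg.norm(U_long - V_long))
```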
The researchers used generalization bounds to rigorously lower bound the final simulation fidelity as a function of the amount of training data used, the optimization quality (i.e., the final cost obtained), and the simulation time. This analysis is complemented by numerical implementations as well as a demonstration of their algorithm on the IBMQ Bogota device.
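The logic behind such a bound can be outlined with a standard error-accumulation argument (a schematic sketch, not the paper's precise statement): if the learned fixed-depth circuit $V$ approximates the short-time step $U(\Delta t)$ well, then iterating it $K$ times accumulates error at most linearly,

$$ \left\| U(\Delta t)^{K} - V^{K} \right\| \;\le\; K\, \left\| U(\Delta t) - V \right\|, $$

so the error of a simulation out to time $t = K\,\Delta t$ is controlled by the short-time error, which is in turn bounded by the training cost plus the generalization error estimated from the amount of training data.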
Publication details: “Dynamical simulation via quantum machine learning with provable generalization”
Publication Date: 2024-03-05
Authors: Joe Gibbs, Zoë Holmes, Matthias C. Caro, Nicholas Ezzell, et al.
Source: Physical Review Research
DOI: https://doi.org/10.1103/PhysRevResearch.6.013241
