Yuya Kawamata and colleagues at The University of Osaka and Kyoto University present a new approach that uses quantum-inspired machine learning for efficient sampling in complex optimisation problems. Their work introduces a divide-and-conquer neural network surrogate framework designed to accelerate Markov chain Monte Carlo (MCMC) methods under fixed Hamming weight constraints. Combining the quantum approximate optimisation algorithm with neural network surrogates improves autocorrelation decay rate constants by factors of approximately 20.3 and 7.6 compared to classical methods on 3-regular graphs. Application to an MNIST feature mask optimisation problem yielded both faster convergence and a 2.03% improvement in classification accuracy, suggesting a pathway towards scalable and efficient MCMC on noisy intermediate-scale quantum (NISQ) devices.
Quantum-inspired machine learning accelerates Markov chain Monte Carlo sampling efficiency
Autocorrelation decay rate constants improved by a factor of 20.3 when a new quantum-inspired machine learning method was used for Boltzmann sampling on 3-regular graphs, exceeding the performance of classical pair-flip techniques. This sharp acceleration addresses a key limitation of Markov chain Monte Carlo (MCMC) methods, one that has previously hindered efficient sampling of large, constrained problems. MCMC is a computational technique that estimates probabilities through random sampling, and its efficiency depends critically on the ‘mixing’ rate of the Markov chain: how quickly it explores the entire probability space. A slow mixing rate leads to high autocorrelation, meaning successive samples are highly correlated and provide little new information. The autocorrelation decay rate constant directly quantifies this mixing speed; a higher value indicates faster convergence and more efficient sampling. Classical MCMC methods often struggle with high-dimensional, constrained problems because designing effective proposal distributions (the mechanisms for suggesting new samples) becomes increasingly difficult. These proposal distributions must balance exploration (searching broadly for promising regions) against exploitation (refining solutions within those regions).
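The mixing speed described above can be estimated directly from a chain of samples. The following sketch (not the paper's code, and using a toy AR(1) chain rather than a Boltzmann sampler) fits an exponential to the empirical autocorrelation function and reports the decay rate constant; a larger constant means faster mixing:

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Normalised autocorrelation of a 1-D chain of scalar observables."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / (len(x) * var)
                     for k in range(1, max_lag + 1)])

def decay_rate_constant(x, max_lag=50):
    """Fit C(k) ~ exp(-k / tau) and return the decay rate constant 1 / tau."""
    c = np.clip(autocorrelation(x, max_lag), 1e-12, None)  # guard the log
    k = np.arange(1, max_lag + 1)
    slope, _ = np.polyfit(k, np.log(c), 1)
    return -slope

# A slowly mixing AR(1) chain: x_t = 0.95 x_{t-1} + noise, so C(k) = 0.95^k
rng = np.random.default_rng(0)
chain = np.zeros(20_000)
for t in range(1, len(chain)):
    chain[t] = 0.95 * chain[t - 1] + rng.normal()
print(decay_rate_constant(chain))  # roughly 0.05 (true value -log 0.95 ≈ 0.051)
```

A factor-of-20.3 improvement in this constant would mean the fitted exponential decays 20.3 times faster, i.e. far fewer MCMC steps between effectively independent samples.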
Quantum approximate optimisation combined with neural network surrogates achieved a breakthrough, effectively learning and accelerating the proposal of new solutions. Applying this framework to an MNIST feature mask optimisation problem involving 784 variables yielded not only faster convergence but also a 2.03% increase in image classification accuracy. The approach employed a ‘divide-and-conquer’ strategy, breaking complex problems into smaller subgraphs to enable quantum sampling with the Quantum Approximate Optimisation Algorithm, or QAOA. QAOA is a hybrid quantum-classical algorithm designed to find approximate solutions to combinatorial optimisation problems: it parameterises a quantum circuit and optimises the parameters to minimise a cost function representing the problem. The subgraphs are chosen to be amenable to QAOA, allowing for efficient quantum sampling. Neural network surrogates were then trained on these quantum samples, creating efficient proposal distributions while upholding problem constraints such as fixed Hamming weight. The Hamming weight is the number of non-zero elements in a binary vector, and keeping it fixed is a common constraint in feature selection and other optimisation tasks. The neural network surrogate acts as a learned approximation of the quantum sampling process, generating proposals quickly without requiring repeated quantum computations.
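A minimal sketch of constrained Metropolis sampling illustrates where such a learned proposal plugs in. This is not the authors' implementation: the surrogate is replaced here by a simple symmetric pair-flip move (swap one 1-bit with one 0-bit), which preserves the Hamming weight exactly; a learned, asymmetric surrogate proposal would additionally need the usual Metropolis-Hastings correction. The toy energy function is an assumption for illustration only:

```python
import numpy as np

def pair_flip_proposal(x, rng):
    """Swap one 1-bit with one 0-bit: preserves the Hamming weight exactly."""
    y = x.copy()
    y[rng.choice(np.flatnonzero(x == 1))] = 0
    y[rng.choice(np.flatnonzero(x == 0))] = 1
    return y

def metropolis_fixed_weight(energy, x0, n_steps, beta=1.0,
                            proposal=pair_flip_proposal, seed=0):
    """Metropolis sampling of exp(-beta * E(x)) over fixed-weight bitstrings.

    `proposal` must be weight-preserving and symmetric; a learned surrogate
    would require a Metropolis-Hastings acceptance correction instead."""
    rng = np.random.default_rng(seed)
    x, e = x0.copy(), energy(x0)
    samples = []
    for _ in range(n_steps):
        y = proposal(x, rng)
        ey = energy(y)
        if rng.random() < np.exp(-beta * (ey - e)):  # accept downhill and some uphill
            x, e = y, ey
        samples.append(x.copy())
    return np.array(samples)

# Toy energy favouring low-index features; weight fixed at 3 of 8 bits
energy = lambda x: float(np.dot(x, np.arange(len(x))))
x0 = np.array([0, 0, 0, 0, 0, 1, 1, 1])
chain = metropolis_fixed_weight(energy, x0, 5_000, beta=2.0)
print(chain.sum(axis=1).min(), chain.sum(axis=1).max())  # both 3: weight preserved
```

Because every proposal respects the constraint, no samples are wasted on infeasible states; the paper's contribution is making the proposal itself smarter, via surrogates trained on QAOA output.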
An improvement by a factor of 7.6 in the autocorrelation decay rate constant was observed when comparing the method to classical pair-flip techniques using non-nearest-neighbour exchanges, indicating faster exploration of potential solutions. Pair-flip techniques generate new samples by exchanging the values of a pair of variables, and non-nearest-neighbour exchanges allow larger changes to the solution; even so, these techniques can be inefficient in high-dimensional spaces. The observed improvement suggests that the quantum-inspired approach navigates the complex probability landscape more effectively. Scaling the method to considerably larger, more complex real-world scenarios remains a substantial challenge, however: the computational cost of QAOA grows with problem size, and training the neural network surrogate requires a substantial amount of data. Further investigation will focus on adapting the approach to different problem domains and assessing its performance on larger datasets, potentially employing techniques such as distributed training and more efficient quantum algorithms.
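The difference between the two classical baselines can be seen in a toy comparison (an illustrative sketch, not the paper's benchmarks): a nearest-neighbour exchange only swaps adjacent bits, so the state drifts slowly, while an unrestricted exchange can move any 1-bit anywhere, covering more ground per step:

```python
import numpy as np

def nearest_neighbour_pair_flip(x, rng):
    """Swap an adjacent (0,1) or (1,0) pair: a local, weight-preserving move."""
    boundaries = [i for i in range(len(x) - 1) if x[i] != x[i + 1]]
    y = x.copy()
    i = boundaries[rng.integers(len(boundaries))]
    y[i], y[i + 1] = y[i + 1], y[i]
    return y

def any_pair_flip(x, rng):
    """Exchange any 1-bit with any 0-bit: a non-nearest-neighbour move."""
    y = x.copy()
    y[rng.choice(np.flatnonzero(x == 1))] = 0
    y[rng.choice(np.flatnonzero(x == 0))] = 1
    return y

def mean_displacement(proposal, n_steps=30, n_runs=200):
    """Average Hamming distance from the start after n_steps pure proposal moves."""
    rng = np.random.default_rng(1)
    total = 0
    for _ in range(n_runs):
        x0 = np.array([1] * 5 + [0] * 15)  # weight 5 of 20, kept fixed throughout
        x = x0.copy()
        for _ in range(n_steps):
            x = proposal(x, rng)
        total += int(np.sum(x != x0))
    return total / n_runs

print(mean_displacement(nearest_neighbour_pair_flip))  # small: local moves diffuse slowly
print(mean_displacement(any_pair_flip))                # larger: long-range exchanges
```

Even the stronger non-nearest-neighbour baseline still proposes moves blindly; the reported factor-of-7.6 gain comes from replacing these blind exchanges with proposals learned from quantum samples.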
Neural network surrogates accelerate quantum Monte Carlo simulations
Markov chain Monte Carlo methods underpin countless simulations, from materials science to financial modelling. These methods model complex systems by simulating random processes and estimating their statistical properties. Classical approaches often falter when faced with highly constrained problems, however. The difficulty arises from the ‘curse of dimensionality’: the exponential growth in computational cost as the number of variables increases. Efficiently exploring complex probability landscapes requires clever ‘proposal’ mechanisms to suggest new solutions. Marrying quantum computation with neural networks demonstrated substantial speedups, offering a new approach to complex simulations currently beyond the reach of conventional methods. The potential lies in leveraging the unique properties of quantum mechanics, such as superposition and entanglement, to explore the solution space more efficiently.
In particular, the divide-and-conquer strategy, which uses neural network ‘surrogates’ to streamline quantum sampling, demonstrably improved performance on benchmark problems. The neural network surrogate learns to mimic the behaviour of the quantum sampler, allowing proposals to be generated quickly without repeated quantum computations. This is particularly important for NISQ devices, which are limited in qubit count and coherence time, and it suggests a pathway towards more efficient modelling even with near-term quantum hardware. Experiments on specific graph structures demonstrated improved performance compared to traditional methods, and a striking increase in accuracy when optimising image features, highlighting the potential of emerging quantum technology for practical applications. The MNIST feature mask optimisation problem involved identifying the most important image features for classification, and the 2.03% improvement in accuracy demonstrates the method’s effectiveness in this context. The method’s limitations and future directions will be explored in subsequent studies, including investigations into alternative neural network architectures and quantum algorithms, as well as ways to reduce the computational overhead of the QAOA component and improve the scalability of the approach.
The researchers successfully combined quantum computation with neural networks to accelerate Markov chain Monte Carlo simulations. This approach uses quantum sampling, streamlined by neural network surrogates, to efficiently explore complex problems with constraints like fixed Hamming weight. Numerical experiments on 3-regular graphs showed average improvements in the autocorrelation decay rate constant by factors of 20.3 and 7.6 compared to classical methods. Applying this method to an MNIST feature mask optimisation problem with 784 features also resulted in a 2.03% increase in classification accuracy, and the authors intend to explore alternative neural network architectures and quantum algorithms in future work.
👉 More information
🗞 Divide-and-Conquer Neural Network Surrogates for Quantum Sampling: Accelerating Markov Chain Monte Carlo in Large-Scale Constrained Optimization Problems
🧠 ArXiv: https://arxiv.org/abs/2604.20701
