Variational Quantum Linear Solvers offer a potential route to solving complex equations on emerging quantum computers, but their effectiveness often diminishes as problem size increases due to challenges in preparing the quantum system. Youla Yang from Indiana University Bloomington and colleagues address this limitation by introducing PVLS, a new technique that predicts optimal starting parameters for these solvers. The method employs graph neural networks to analyze the structure of the equation and generate initial settings that dramatically improve the speed and reliability of the quantum computation. Results demonstrate that PVLS accelerates optimization by up to 2.6× for systems ranging from small to moderately large, paving the way for more practical quantum algorithms in the near future.
PVLS learns from the structure of the linear system to predict effective initial parameters for the quantum circuit, providing a better starting point for optimization. Extensive testing demonstrates that PVLS significantly outperforms traditional initialization methods, including random initialization, Principal Component Analysis, and minimum-norm strategies, across a wide range of test cases.
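As a rough illustration of the idea (not the paper's actual architecture), the prediction step can be sketched as a small message-passing network that reads graph features of the coefficient matrix and emits a vector of circuit angles. All features, layer sizes, and weights below are hypothetical stand-ins:

```python
import numpy as np

def matrix_to_graph(A):
    """Treat the coefficient matrix as a weighted graph:
    node i = equation/variable i, edge (i, j) wherever A[i, j] != 0."""
    adj = (A != 0).astype(float)
    np.fill_diagonal(adj, 0.0)
    # Simple per-node features: diagonal entry, row norm, node degree.
    feats = np.stack([np.diag(A),
                      np.linalg.norm(A, axis=1),
                      adj.sum(axis=1)], axis=1)
    return adj, feats

def gnn_predict_params(A, W1, W2):
    """One round of mean-aggregation message passing, then a mean-pool
    readout that maps the graph embedding to initial circuit angles."""
    adj, h = matrix_to_graph(A)
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    msg = adj @ h / deg              # aggregate neighbour features
    h = np.tanh((h + msg) @ W1)      # update node embeddings
    g = h.mean(axis=0)               # graph-level readout (mean pool)
    return np.tanh(g @ W2) * np.pi   # angles bounded in (-pi, pi)

rng = np.random.default_rng(0)
A = rng.standard_normal((16, 16))
A[np.abs(A) < 1.0] = 0.0             # sparsify the toy system
W1 = rng.standard_normal((3, 8)) * 0.1   # hypothetical trained weights
W2 = rng.standard_normal((8, 12)) * 0.1
theta0 = gnn_predict_params(A, W1, W2)
print(theta0.shape)  # → (12,)
```

In a trained PVLS-style model, `W1` and `W2` would be learned so that the predicted angles land near a good region of the circuit's loss landscape rather than at a random point.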
The team evaluated PVLS on more than 15,000 synthetic systems with dimensions ranging from 2⁴ to 2¹⁰ (16 to 1024), as well as ten real-world sparse matrices, confirming that the method generalizes to practical problems and remains robust. VQLS instances initialized with PVLS converge in over 60% fewer optimization steps on average, corresponding to a 2.6× speedup in total training time, while reaching accuracy comparable to existing methods. Compared with random initialization, PVLS reduces the initial cost by an average of 81.3%, the final loss by 71%, and the estimated overall runtime by 62.5%. The inference overhead is minimal, at roughly 2 milliseconds per instance, highlighting its suitability for efficient deployment.
This work addresses a critical challenge in the field: VQLS circuits are difficult to train because of barren plateaus and inefficient parameter initialization. The team's key idea is to use graph neural networks (GNNs) to predict effective initial parameters for the circuit, leveraging the structural information embedded in the coefficient matrix of the linear system. Across systems ranging from small to moderately large, PVLS consistently improves both the stability and the speed of convergence compared with commonly used initialization techniques. While these efficiency gains are currently based on classical simulations, they suggest strong potential for practical time savings in hybrid quantum-classical workflows.
👉 More information
🗞 PVLS: A Learning-based Parameter Prediction Technique for Variational Quantum Linear Solvers
🧠 ArXiv: https://arxiv.org/abs/2512.04909
