Matrix-free Neural Preconditioner Accelerates Dirac Operator Solutions in Lattice Gauge Theory

Lattice gauge theory calculations present a significant computational challenge, demanding efficient methods for solving very large linear systems. A team led by Yixuan Sun and Srinivas Eswar from Argonne National Laboratory, together with Yin Lin, William Detmold, and Phiala Shanahan from the Massachusetts Institute of Technology, and with contributions from Xiaoye Li, now advances a new approach to this problem. Their research introduces a ‘matrix-free’ neural preconditioning technique that accelerates the calculations without explicitly constructing large matrices, a common bottleneck in these simulations. The method learns to improve the efficiency of iterative solvers, and the team demonstrates a near halving of the iterations needed for convergence in the scenarios studied. Crucially, the framework generalises: it learns patterns applicable to different lattice sizes, paving the way for more efficient and scalable simulations of fundamental particle interactions.

Neural Networks Accelerate Quantum Chromodynamics Simulations

Scientists are harnessing the power of neural networks to accelerate simulations in lattice quantum chromodynamics (QCD), a fundamental theory describing the strong force. These simulations, essential for understanding the building blocks of matter, require solving complex linear systems that can be computationally demanding. Researchers developed a novel approach using neural networks, specifically Fourier Neural Operators and Fully Convolutional Networks, as preconditioners to improve the efficiency of iterative solvers. This research demonstrates that neural networks can learn to approximate the inverse of the Dirac operator, accelerating the convergence of these solvers through a matrix-free training technique crucial for very large systems.
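
To make the matrix-free idea concrete, the sketch below is our own illustration rather than the authors' code: the toy operator, network, and loss are assumptions. It trains a small convolutional network to approximate the inverse of a lattice operator that is only ever applied as a matrix-vector product on random vectors, so no matrix is stored.

```python
# Illustrative sketch only (not the paper's code): matrix-free training of a
# neural preconditioner. The operator A is available solely as a matvec; here a
# toy 2-D lattice Laplacian plus mass term stands in for the Hermitian positive
# definite Dirac normal operator used in lattice QCD.
import torch
import torch.nn as nn

def apply_A(v, mass=0.5):
    # Matrix-free matvec via periodic shifts; in lattice QCD this would be the
    # Dirac stencil acting on the field with the gauge links.
    lap = (torch.roll(v, 1, dims=-1) + torch.roll(v, -1, dims=-1)
           + torch.roll(v, 1, dims=-2) + torch.roll(v, -1, dims=-2) - 4.0 * v)
    return (mass ** 2) * v - lap

class SimplePreconditioner(nn.Module):
    # Stand-in for the Fourier Neural Operator / fully convolutional models;
    # two channels could hold the real and imaginary parts of a complex field.
    def __init__(self, channels=2, width=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.GELU(),
            nn.Conv2d(width, width, 3, padding=1), nn.GELU(),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = SimplePreconditioner()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    # Draw random vectors z, form b = A z matrix-free, and train the network to
    # recover z from b, i.e. to approximate the action of A^{-1}.
    z = torch.randn(8, 2, 16, 16)
    b = apply_A(z)
    loss = torch.mean((model(b) - z) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The only access to the operator is through the matvec, which is what allows the approach to scale to systems far too large for an explicit matrix to be stored or factorised.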

Experiments show that both Fourier Neural Operators and Fully Convolutional Networks substantially reduce the condition number of the preconditioned system relative to the original one. Increasing the number of random vectors used during training improves performance, although the benefits diminish beyond a certain point. The neural network-based preconditioners also demonstrate promising scalability with increasing lattice size, a critical factor for tackling increasingly complex problems. A further advantage is the approach’s matrix-free nature, which avoids the need to store and manipulate large matrices.
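
As a rough illustration of how such a preconditioner is used and evaluated, a trained network simply replaces the `precond` callable in a generic preconditioned conjugate gradient loop like the one below (names and solver details are our own choices, not the paper's); the iteration count to a fixed tolerance is the practical figure of merit.

```python
# Generic preconditioned conjugate gradient (PCG) for a Hermitian positive
# definite system. Both the operator and the preconditioner are callables, so
# everything stays matrix-free. Compare the returned iteration counts with
# precond = identity versus precond = trained network.
import numpy as np

def pcg(matvec, b, precond=lambda r: r, tol=1e-8, max_iter=10_000):
    x = np.zeros_like(b)
    r = b - matvec(x)
    z = precond(r)
    p = z.copy()
    rz = np.vdot(r, z)
    for k in range(1, max_iter + 1):
        Ap = matvec(p)
        alpha = rz / np.vdot(p, Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k
        z = precond(r)
        rz_new = np.vdot(r, z)
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter
```

A trained network is dropped in by wrapping its forward pass as the `precond` callable; the near halving of iterations reported later in this article corresponds to exactly this kind of head-to-head comparison.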

The neural network-based preconditioners can potentially generalize to different lattice configurations and parameters without retraining, further enhancing their efficiency. Future research will focus on exploring different network architectures, developing adaptive training strategies, and integrating these preconditioners into existing production QCD codes. This work represents a significant step towards accelerating QCD calculations and gaining deeper insights into the fundamental laws of nature.

Operator Learning Accelerates Lattice Gauge Theory Calculations

Scientists have developed a new framework for accelerating calculations within lattice gauge theory, a cornerstone of particle physics. This research addresses the computational expense of solving linear systems that arise when simulating these theories by leveraging operator learning techniques to construct effective preconditioners. This innovative approach avoids the need for explicit matrices, improving efficiency during both model training and application. Experiments using a simplified gauge theory demonstrate the effectiveness of this approach, measuring a significant reduction in the condition number of linear systems, a key indicator of solver efficiency.
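
For readers unfamiliar with the terminology, the standard textbook relation below (general background, not a result of this paper) explains why the condition number is the key indicator: the conjugate gradient error bound degrades as the condition number grows, and a good preconditioner shrinks the effective condition number.

```latex
% Standard conjugate gradient convergence bound (textbook background, not taken
% from the paper). For a Hermitian positive definite matrix A with condition
% number kappa(A) = lambda_max / lambda_min, the CG iterates satisfy
\[
  \| x_k - x_* \|_A \;\le\; 2
  \left( \frac{\sqrt{\kappa(A)} - 1}{\sqrt{\kappa(A)} + 1} \right)^{\!k}
  \| x_0 - x_* \|_A .
\]
% A preconditioner M \approx A^{-1} replaces A by the preconditioned operator,
% whose condition number kappa(M A) is much smaller, so the same tolerance is
% reached in far fewer iterations.
```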

Applying the learned preconditioners approximately halved the number of iterations required for convergence in the relevant parameter ranges, and this improvement was observed across lattice sizes of 8, 16, 32, and 64, demonstrating the framework’s scalability. Further analysis revealed the framework’s ability to learn a general mapping that depends on the lattice structure, enabling zero-shot application to Dirac operators constructed from gauge field configurations of different sizes. The researchers also compared their neural network-based preconditioners against standard techniques such as incomplete Cholesky and even-odd preconditioning: while incomplete Cholesky achieves slightly lower condition numbers, the neural network approach avoids explicit matrix decompositions and triangular solves, offering a computational advantage. This research represents a substantial advance in the field, offering a versatile and efficient tool for tackling computationally demanding problems. Future research will extend the framework to more complex gauge groups and explore alternative structures for the preconditioning operators, further enhancing the capabilities and applicability of this approach to computational particle physics.
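
The zero-shot behaviour across lattice sizes is plausible given the architecture: a Fourier Neural Operator parameterises a fixed number of low Fourier modes rather than a fixed grid, so the same weights can be applied at any lattice extent. The minimal spectral-convolution sketch below is our own simplified illustration, not the paper's architecture, and makes that resolution independence explicit.

```python
# Simplified FNO-style spectral convolution (illustrative only): the learned
# weights act on a fixed number of low Fourier modes, so the same parameters
# can be applied to 8x8, 16x16, 32x32, or 64x64 lattices without retraining.
import torch
import torch.nn as nn

class SpectralConv2d(nn.Module):
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / channels
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, modes, modes,
                                dtype=torch.cfloat))

    def forward(self, x):
        # x: (batch, channels, L, L) for any lattice extent L >= modes.
        x_ft = torch.fft.rfft2(x)
        out_ft = torch.zeros_like(x_ft)
        m = self.modes
        # Mix channels on the retained low-frequency modes only (a real FNO
        # also keeps the corresponding negative-frequency block).
        out_ft[:, :, :m, :m] = torch.einsum(
            "bixy,ioxy->boxy", x_ft[:, :, :m, :m], self.weight)
        return torch.fft.irfft2(out_ft, s=x.shape[-2:])

layer = SpectralConv2d(channels=2, modes=4)
for L in (8, 16, 32, 64):
    y = layer(torch.randn(1, 2, L, L))  # same weights, different lattice sizes
    print(L, tuple(y.shape))
```

This resolution independence is also the operational contrast with incomplete Cholesky noted above: nothing has to be factorised or re-decomposed when the gauge configuration or lattice size changes.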

👉 More information
🗞 Matrix-free Neural Preconditioner for the Dirac Operator in Lattice Gauge Theory
🧠 ArXiv: https://arxiv.org/abs/2509.10378

Quantum News

As the Official Quantum Dog (or hound), my role is to dig out the latest nuggets of quantum goodness. There is so much happening right now in the field of technology, whether AI or the march of robots. But Quantum occupies a special space. Quite literally a special space, a Hilbert space in fact, haha! Here I try to provide some of the news that might be considered breaking news in the Quantum Computing space.

Latest Posts by Quantum News:

Toyota & ORCA Achieve 80% Compute Time Reduction Using Quantum Reservoir Computing

Toyota & ORCA Achieve 80% Compute Time Reduction Using Quantum Reservoir Computing

January 14, 2026
GlobalFoundries Acquires Synopsys’ Processor IP to Accelerate Physical AI

GlobalFoundries Acquires Synopsys’ Processor IP to Accelerate Physical AI

January 14, 2026
Fujitsu & Toyota Systems Accelerate Automotive Design 20x with Quantum-Inspired AI

Fujitsu & Toyota Systems Accelerate Automotive Design 20x with Quantum-Inspired AI

January 14, 2026