Simulating the behaviour of electrons in strong electromagnetic fields is a major challenge for modern computing, demanding ever more powerful methods to model relativistic quantum effects. Johanne Elise Vembea from the University of Bergen, Marcin Krotkiewski from the University of Oslo, Magnar Bjørgve from the Norwegian Research Infrastructure Services (NRIS), and colleagues have developed GaDE, a new solver for the time-dependent Dirac equation that harnesses modern graphics processing units (GPUs) and distributed high-performance computing. Tested on the LUMI supercomputer, the implementation scales to 2048 GPUs with 85% parallel efficiency, paving the way for more accurate predictive simulations of ultra-intense laser experiments, a deeper understanding of fundamental relativistic quantum phenomena, and the use of exascale systems for complex atomic physics.
Modern heterogeneous high-performance computing (HPC) systems, powered by advanced graphics processing unit (GPU) architectures, offer accelerated computing with unprecedented performance and scalability. This work presents a GPU-accelerated solver for the three-dimensional (3D) time-dependent Dirac equation, optimised for distributed HPC systems. The solver, named GaDE, is designed to simulate the electron dynamics induced in atoms by electromagnetic fields in the relativistic regime. It combines MPI with CUDA/HIP to target both NVIDIA and AMD GPU architectures, and the implementation strategies employed are discussed.
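For readers who want the underlying equation: in atomic units and with minimal coupling to an external vector potential, the time-dependent Dirac equation for an electron bound to a nucleus of charge Z takes the standard textbook form sketched below. The precise gauge, potential terms, and conventions used inside GaDE are not detailed here, so treat this as the generic form rather than the solver's exact formulation.

```latex
% Time-dependent Dirac equation in atomic units (\hbar = m_e = e = 1):
% \psi is the four-component spinor, \alpha and \beta the 4x4 Dirac matrices,
% A(r,t) the external vector potential (e.g. a laser pulse), and -Z/r the
% nuclear Coulomb potential. Exact conventions in GaDE may differ.
i\,\frac{\partial \psi(\mathbf{r},t)}{\partial t}
  = \left[\, c\,\boldsymbol{\alpha}\cdot\bigl(\mathbf{p} + \mathbf{A}(\mathbf{r},t)\bigr)
           + \beta c^{2} - \frac{Z}{r} \,\right] \psi(\mathbf{r},t)
```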
Relativistic Calculations for Strong-Field Atomic Physics
Research focuses on relativistic quantum calculations for atomic and molecular physics, particularly in strong fields, with an emphasis on high-performance computing (HPC) and GPU acceleration. Key areas include strong-field physics, such as photoionization and the interaction of intense laser fields with atoms and molecules, alongside solving the Dirac equation for atomic and molecular systems. Computational methods employed include B-spline basis sets for representing wavefunctions, finite-difference techniques, parallel computing, graph partitioning with tools such as METIS and PT-Scotch, and sparse matrix techniques. GPU acceleration is central to achieving significant speedups in these calculations, and investigations extend to zeptosecond physics, i.e. processes unfolding on the zeptosecond (10⁻²¹ s) timescale. This interdisciplinary research, combining theoretical physics, computational physics, and computer science, reflects a broader trend towards exascale computing, aiming for both accurate results and computational feasibility.
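The graph-partitioning step mentioned above can be illustrated with METIS's public C API. The sketch below is not taken from GaDE; it only shows, for a toy six-vertex ring graph standing in for a sparse-matrix adjacency structure, how METIS_PartGraphKway splits vertices into parts that could then be assigned to MPI ranks.

```cpp
// Minimal illustration of graph partitioning with METIS (C API, usable from
// C++). Not GaDE's code: a toy ring graph is split into 2 parts as a stand-in
// for distributing a sparse matrix across MPI ranks. Error checks are minimal.
#include <metis.h>
#include <cstdio>
#include <vector>

int main() {
    // Undirected 6-vertex ring in CSR form: xadj[i]..xadj[i+1] indexes the
    // neighbours of vertex i inside adjncy.
    std::vector<idx_t> xadj   = {0, 2, 4, 6, 8, 10, 12};
    std::vector<idx_t> adjncy = {1, 5, 0, 2, 1, 3, 2, 4, 3, 5, 4, 0};

    idx_t nvtxs  = 6;               // number of vertices
    idx_t ncon   = 1;               // one balance constraint (vertex count)
    idx_t nparts = 2;               // e.g. two MPI ranks
    idx_t objval = 0;               // edge-cut returned by METIS
    std::vector<idx_t> part(nvtxs); // partition label per vertex

    int status = METIS_PartGraphKway(&nvtxs, &ncon, xadj.data(), adjncy.data(),
                                     nullptr, nullptr, nullptr, &nparts,
                                     nullptr, nullptr, nullptr,
                                     &objval, part.data());
    if (status != METIS_OK) return 1;

    std::printf("edge-cut: %lld\n", static_cast<long long>(objval));
    for (idx_t v = 0; v < nvtxs; ++v)
        std::printf("vertex %lld -> part %lld\n",
                    static_cast<long long>(v), static_cast<long long>(part[v]));
    return 0;
}
```

Compile and link against METIS (e.g. `-lmetis`); PT-Scotch provides comparable partitioning functionality for distributed graphs.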
GaDE Solves Relativistic Electron Dynamics on GPUs
Scientists have developed GaDE, a GPU-accelerated solver for the three-dimensional time-dependent Dirac equation, designed for distributed high-performance computing systems. The solver simulates electron dynamics in atoms subjected to electromagnetic fields in the relativistic regime, combining the Message Passing Interface (MPI) with the CUDA and HIP programming models to run on both NVIDIA and AMD GPU architectures. GPU-aware MPI maximizes communication performance, and experiments on the LUMI pre-exascale supercomputer, powered by AMD MI250X GPUs and HPE's Slingshot interconnect, demonstrate GaDE's capability to simulate complex quantum phenomena. Performance tests show comparable compute and memory bandwidth for the NVIDIA A100 and AMD MI250X GPUs, with the NVIDIA GH200 delivering higher performance.
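As an illustration of what "GPU-aware MPI" means in practice, the sketch below passes HIP device pointers directly to MPI point-to-point calls, so boundary data moves without explicit staging through host memory. It is a minimal, hypothetical exchange over a 1D ring, not GaDE's actual communication pattern, and it assumes an MPI library built with GPU support (on LUMI's Cray MPICH this is enabled via MPICH_GPU_SUPPORT_ENABLED=1).

```cpp
// Sketch of a GPU-aware MPI exchange: device buffers are handed straight to
// MPI. Buffer sizes and the ring neighbour pattern are placeholders, not
// GaDE's layout. Error checking is omitted for brevity.
#include <mpi.h>
#include <hip/hip_runtime.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 1 << 20;                  // placeholder halo size (doubles)
    double *send_d = nullptr, *recv_d = nullptr;
    hipMalloc(&send_d, n * sizeof(double)); // device allocations
    hipMalloc(&recv_d, n * sizeof(double));
    hipMemset(send_d, 0, n * sizeof(double));

    // Exchange with left/right neighbours in a 1D ring decomposition.
    int right = (rank + 1) % size;
    int left  = (rank - 1 + size) % size;

    MPI_Request reqs[2];
    MPI_Irecv(recv_d, n, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(send_d, n, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    hipFree(send_d);
    hipFree(recv_d);
    MPI_Finalize();
    return 0;
}
```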
Weak scaling tests on LUMI achieved 85% parallel efficiency on 2048 GPUs, demonstrating the solver's ability to maintain performance as the problem size and the number of processors grow together. Strong scaling experiments further demonstrate a 16x speedup on 32 GPUs, corresponding to 50% efficiency, a significant achievement for such a communication-intensive calculation. These results confirm GaDE's high scalability and suitability for exascale computing environments, opening new avenues for predictive simulations of ultra-intense laser experiments and enabling detailed investigations into relativistic quantum effects that were previously inaccessible due to computational limitations. The work delivers a powerful tool for researchers seeking to understand the fundamental interactions between light and matter in the relativistic regime, and promises to advance ultrafast laser physics and quantum dynamics.
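For reference, the quoted percentages follow from the standard definitions of parallel efficiency; the measured timings themselves come from the reported experiments.

```latex
% Strong scaling (fixed problem size): speedup S(p) = T(1)/T(p).
E_{\text{strong}}(p) = \frac{S(p)}{p}
  \quad\Longrightarrow\quad E_{\text{strong}}(32) = \frac{16}{32} = 50\%
% Weak scaling (problem size grows with p, so the ideal runtime is constant):
E_{\text{weak}}(p) = \frac{T(1)}{T(p)}
  \quad\Longrightarrow\quad E_{\text{weak}}(2048) \approx 85\%
```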
GaDE Achieves Scalable Dirac Equation Solver
The development of GaDE represents a significant advance in computational physics, delivering a highly scalable solver for the three-dimensional time-dependent Dirac equation. The researchers implemented a GPU-accelerated method, combining the Message Passing Interface (MPI) with CUDA and HIP, to simulate electron dynamics in atoms subjected to strong electromagnetic fields, a process crucial for understanding relativistic quantum phenomena. The solver targets both NVIDIA and AMD GPU architectures, broadening its applicability across diverse high-performance computing environments. Evaluations on a leading pre-exascale supercomputer demonstrate GaDE's performance, achieving 85% parallel efficiency across 2048 GPUs and a sixteen-fold speedup on 32 GPUs. These results confirm the solver's ability to handle communication-intensive calculations efficiently, paving the way for simulations of ultra-intense laser experiments and further investigations of relativistic quantum effects. While the method's spherical-coordinate approach has limitations with linearly polarized laser pulses, the solver's scalability positions it well for future exascale systems, and future work may focus on refining the handling of laser polarization or extending the solver to more complex atomic systems.
👉 More information
🗞 GaDE — GPU-acceleration of time-dependent Dirac Equation for exascale
🧠 arXiv: https://arxiv.org/abs/2512.21697
