Fermionic quantum turbulence, a complex phenomenon that underlies puzzling occurrences such as pulsar glitches in rapidly spinning neutron stars, can be simulated using ultracold atoms. These simulations push the boundaries of current classical computers and were made possible by improvements to the Eigenvalue soLvers for Petaflop Applications (ELPA) library, which enables the diagonalization of matrices of record size. The research advances high-performance computing and provides a platform for further progress in the study of ultracold Fermi gases; the simulation data and source codes are made openly available.
What is the Significance of Fermionic Quantum Turbulence in High-Performance Computing?
Fermionic quantum turbulence is a complex phenomenon that underlies puzzling occurrences such as pulsar glitches in rapidly spinning neutron stars. Ultracold atoms provide a platform for an analog quantum computer capable of simulating this turbulence. Unlike other platforms, such as liquid helium, ultracold atoms come with a viable theoretical framework for their dynamics. However, these simulations push the boundaries of current classical computers. The largest simulations of fermionic quantum turbulence to date are presented, along with an explanation of the computing technology needed, especially improvements in the Eigenvalue soLvers for Petaflop Applications (ELPA) library. This technology enables the diagonalization of record-size matrices, millions of rows by millions of columns.
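At laptop scale, the same mathematical task looks roughly like the minimal sketch below, which diagonalizes a small dense Hermitian matrix with NumPy. ELPA solves this problem in a distributed, accelerator-aware fashion for matrices with millions of rows; the matrix size and the use of numpy.linalg.eigh here are purely illustrative choices, not the code used in the paper.

    import numpy as np

    # Toy stand-in for the dense Hermitian eigenproblem that ELPA solves at scale.
    # Here n is tiny; in the simulations described above n reaches into the millions,
    # which requires a distributed solver such as ELPA rather than a single node.
    n = 1000
    rng = np.random.default_rng(0)
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    h = (a + a.conj().T) / 2  # Hermitian matrix, e.g. a discretized Hamiltonian

    eigenvalues, eigenvectors = np.linalg.eigh(h)  # full spectrum and eigenvectors

    # Memory alone shows why record-size problems need supercomputers:
    # a dense complex double-precision matrix occupies 16 * n**2 bytes.
    print(f"dense storage for n={n}: {16 * n**2 / 1e6:.1f} MB")           # ~16 MB
    print(f"dense storage for n=1_000_000: {16 * 10**12 / 1e12:.0f} TB")  # 16 TB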
The process of dissipation and thermalization in fermionic quantum turbulence is quantified using the internal structure of vortices as a new probe of the local effective temperature. All simulation data and source codes are made available to facilitate rapid scientific progress in the field of ultracold Fermi gases. The research is significant both for pushing the limits of high-performance computing and for providing a platform for further work on ultracold Fermi gases.
How Does High-Performance Computing Complement Theoretical and Experimental Physics?
Computation is regarded as the third pillar of physical science, complementing theoretical and experimental physics. Each pillar has its unique methodology: theoretical physics relies on mathematical analysis, measurements are the central interest of experimental physics, and numerical modeling is the heart of computational physics. Many recent breakthroughs, like observing the Higgs boson or detecting gravitational waves, would not have been possible without advanced numerical analysis capabilities that adapt algorithmic breakthroughs to evolving hardware.
The synergy between theory and computation is demonstrated through advances in linear algebra libraries that enable Europe’s fastest supercomputer, LUMI, to diagonalize matrices of record size. This allows for the simulation of turbulent dynamics in superfluid quantum systems. These simulations are used to investigate how vortices dissipate energy, driving quantum turbulence in neutron stars and ultracold atom experiments.
What Challenges Does High-Performance Computing Face?
As Moore’s law reaches its limits, using high-performance computing (HPC) effectively becomes a significant challenge. Current HPC systems consist of thousands of interconnected nodes, each comprising dozens of computing cores or multiple hardware accelerators. In particular, accelerators such as graphics processing units (GPUs) account for most of the computing power on modern platforms. Leadership supercomputers deliver from 10^17 floating-point operations per second (FLOPS) for pre-exascale systems to 10^18 FLOPS, or one exaflop, for exascale systems.
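As a back-of-the-envelope illustration of these figures, the aggregate peak performance of a GPU-accelerated machine is roughly the product of node count, accelerators per node, and peak FLOPS per accelerator. The numbers in the sketch below are hypothetical placeholders, not the specifications of any system named in this article.

    # Hypothetical machine parameters, chosen only to illustrate the scale of
    # pre-exascale (1e17 FLOPS) versus exascale (1e18 FLOPS) systems.
    nodes = 2500                # assumed number of compute nodes
    gpus_per_node = 4           # assumed accelerators per node
    peak_flops_per_gpu = 5e13   # assumed 50 TFLOPS peak per GPU

    peak_total = nodes * gpus_per_node * peak_flops_per_gpu
    print(f"aggregate peak: {peak_total:.2e} FLOPS = {peak_total / 1e18:.2f} exaflops")
    # -> 5.00e+17 FLOPS = 0.50 exaflops, a pre-exascale system by the definition above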
The top three supercomputers on the Top 500 list (June 2023) are Frontier (Oak Ridge National Laboratory, USA) with 1.19 Eflops, Supercomputer Fugaku (RIKEN Center for Computational Science, Japan) with 0.44 Eflops, and LUMI (EuroHPC-CSC, Finland) with 0.31 Eflops. LUMI, the fastest European system, is used here to demonstrate how such capabilities can advance computational physics.
How is Software Adapted for High-Performance Computing?
While the computational potential of HPC is enormous, using these capabilities requires a highly tuned software stack capable of dealing with massively parallel and heterogeneous architectures. Core scientific libraries are constantly being adjusted to maximize performance on new hardware. These include Fast Fourier Transforms, linear algebra routines, libraries for matrix decomposition, random number generators, and solvers for algebraic and differential equations. These core libraries form the building blocks for the efficient domain-specific scientific packages that enable breakthroughs in quantum physics.
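To make the notion of building blocks concrete, the sketch below strings two such primitives together in plain NumPy: a pseudorandom number generator and a Fast Fourier Transform used to apply a kinetic-energy operator spectrally. Production codes call heavily tuned, GPU-aware equivalents of these routines; the grid size and the specific operation are illustrative assumptions, not the software stack of the paper.

    import numpy as np

    # A random complex field on a periodic 3D grid, the kind of object evolved
    # in superfluid simulations (the 64^3 grid here is purely illustrative).
    rng = np.random.default_rng(42)   # random-number-generator building block
    grid = 64
    field = (rng.standard_normal((grid, grid, grid))
             + 1j * rng.standard_normal((grid, grid, grid)))

    # Fast-Fourier-Transform building block: apply the kinetic-energy operator
    # -(1/2) * Laplacian spectrally, a standard step in spectral solvers.
    k = 2 * np.pi * np.fft.fftfreq(grid)          # angular wavenumbers (unit spacing)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2

    field_k = np.fft.fftn(field)                  # transform to momentum space
    kinetic = np.fft.ifftn(0.5 * k2 * field_k)    # apply operator, transform back
    print(kinetic.shape)                          # (64, 64, 64)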
How Does Density Functional Theory Aid in Simulating Quantum Dynamics?
Simulating quantum dynamics is one of the hardest challenges for classical computers due to the exponentially large size of a many-body wavefunction. Even storing the wavefunction for a modest nucleus like tin with 100 nucleons would require more bytes than there are atoms in the visible universe. The techniques of density functional theory (DFT) and its time-dependent extension, time-dependent density functional theory (TDDFT), have revolutionized the ability to study quantum dynamics. This is achieved by replacing the need to store the many-body wavefunction with an energy functional of a handful of densities. Despite needing to approximate the form of the functional, TDDFT has become one of the most successful methods for simulating dynamics in quantum systems.
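The exponential wall can be made quantitative with a short counting argument. For A identical fermions distributed over M single-particle states, the number of antisymmetric many-body amplitudes is the binomial coefficient C(M, A); storing one complex number per amplitude quickly dwarfs the roughly 10^80 atoms in the visible universe, whereas a density-functional description keeps only a handful of densities on the same grid. The basis size and the list of densities in the sketch below are assumed, illustrative choices, not values from the paper.

    from math import comb, log10

    # Assumed single-particle basis: a modest 20^3 spatial lattice with 4 internal
    # (spin-isospin) states per site -- illustrative numbers only.
    M = 20**3 * 4          # 32,000 single-particle states
    A = 100                # nucleons in a tin-like nucleus

    dim = comb(M, A)                 # antisymmetric many-body Hilbert-space dimension
    bytes_wavefunction = 16 * dim    # one complex double (16 bytes) per amplitude
    print(f"log10(bytes for full wavefunction) ~ {log10(bytes_wavefunction):.0f}")  # ~294
    print("log10(atoms in visible universe)   ~ 80")

    # A density-functional description needs only a few densities on the grid,
    # e.g. number, kinetic, and anomalous densities (illustrative count).
    densities = 4
    bytes_dft = 16 * densities * 20**3
    print(f"bytes for DFT densities: {bytes_dft / 1e6:.2f} MB")   # ~0.51 MB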
Publication details: “Fermionic quantum turbulence: Pushing the limits of high-performance computing”
Publication Date: 2024-04-15
Authors: Gabriel Wlazłowski, Michael McNeil Forbes, Sananda Sarkar, Andreas Marek, et al.
Source: PNAS Nexus
DOI: https://doi.org/10.1093/pnasnexus/pgae160
