The accurate simulation of material behaviour at the atomic level is crucial for advances in fields ranging from materials science to drug discovery, yet it remains computationally demanding. Machine-learning interatomic potentials (MLIPs) offer a pathway to accelerate these simulations by approximating the complex interactions between atoms, balancing accuracy against computational cost. Researchers are continually refining these techniques, and a team led by Hongfu Huang, Junhao Peng, Kaiqi Li, Jian Zhou, and Zhimei Sun from Beihang University and Guangdong University of Technology presents a novel approach to training neuroevolution potentials (NEPs) with analytical gradients, detailed in their article “Efficient GPU-Accelerated Training of a Neuroevolution Potential with Analytical Gradients”. Their work makes NEP training, traditionally a derivative-free process, more efficient by incorporating explicit analytical gradients and the Adam optimiser, substantially reducing training time while maintaining predictive accuracy and physical interpretability, as demonstrated through simulations of antimony-telluride systems.
Recent advances in materials science increasingly depend on accurate interatomic potentials, which underpin large-scale molecular dynamics simulations and accelerate materials discovery. Machine-learning interatomic potentials offer a balance between computational efficiency and quantum-mechanical accuracy, and neuroevolution potentials (NEPs) represent a promising avenue within this field. A newly developed gradient-optimized neuroevolution potential (GNEP) training framework addresses limitations in the computational efficiency of conventional NEP training, establishing a robust methodology for developing transferable interatomic potentials across diverse materials systems. The researchers demonstrate the GNEP framework’s efficacy through applications to silicon, amorphous silica, GeTe/Sb2Te3 superlattices, Pd-Cu-Ni-P alloys, ICOF-10n-Li/Na frameworks, and antimony-telluride (Sb-Te) systems, showcasing its versatility and potential for widespread adoption.
Conventional NEP training typically relies on derivative-free optimisation methods such as the separable natural evolution strategy (SNES), which can be computationally expensive because the optimiser must explore the error surface without knowing which direction reduces the error. The GNEP framework instead uses gradient-based optimisation, significantly improving efficiency by leveraging the slope of the error surface to guide the learning process. This allows faster convergence and reduces the computational resources required for training, as illustrated in the sketch below.
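To make the contrast concrete, here is a minimal sketch of gradient-based fitting with the Adam optimiser, written in PyTorch. It is illustrative only and not the authors’ GNEP implementation: the random descriptors, the small network, and the synthetic reference energies are hypothetical stand-ins for a real training set.

```python
# Minimal sketch of gradient-based potential fitting with Adam (PyTorch).
# Illustrative toy only: descriptors, network size, and data are stand-ins.
import torch

# Toy dataset: 256 structures, each reduced to a 30-dimensional descriptor,
# with one reference energy per structure (in practice, from DFT).
descriptors = torch.randn(256, 30)
ref_energy = torch.randn(256)

# Small feed-forward network mapping descriptors to energies, loosely
# analogous to the neural-network part of a NEP.
model = torch.nn.Sequential(
    torch.nn.Linear(30, 32),
    torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    pred = model(descriptors).squeeze(-1)
    loss = loss_fn(pred, ref_energy)
    loss.backward()   # analytical gradients via backpropagation
    optimizer.step()  # Adam update guided by the slope of the error surface
```

A derivative-free optimiser such as SNES would instead evaluate the loss for whole populations of candidate parameter sets each generation, never calling `loss.backward()`, which is what makes the gradient-based route substantially cheaper per update.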
The GNEP framework is validated through application to antimony-telluride (Sb-Te) systems, encompassing crystalline, liquid, and disordered phases. Results indicate a substantial reduction in fitting time compared with conventional NEP training. Rigorous validation against density functional theory (DFT), a quantum-mechanical method for computing the electronic structure of materials, confirms that the fitted potentials maintain high accuracy and transferability, meaning they reliably predict material behaviour across different configurations.
Further demonstrating the versatility of the GNEP framework, the researchers train potentials on six diverse materials datasets, including silicon, amorphous silica, GeTe/Sb2Te3 superlattices, Pd-Cu-Ni-P alloys, and ICOF-10n-Li/Na frameworks (n = 1, 2, 3). Training curves plotting root mean squared error (RMSE), a measure of the deviation between predicted and reference values, against both wall-clock time and epochs illustrate the performance of the Adam and separable natural evolution strategy (SNES) optimisation algorithms across these varied materials. The consistent performance across these datasets highlights the generalisability of the GNEP approach and its potential for large-scale molecular dynamics simulations, where computational cost is a significant constraint.
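RMSE itself is straightforward to compute; a short NumPy sketch follows, with purely illustrative numbers (in a real training run the arrays would hold per-atom energies, force components, and virials from the dataset).

```python
# Minimal RMSE computation; the values below are illustrative only.
import numpy as np

def rmse(predicted, reference):
    """Root mean squared error between predicted and reference arrays."""
    diff = np.asarray(predicted) - np.asarray(reference)
    return float(np.sqrt(np.mean(diff ** 2)))

# Hypothetical per-structure energies (eV/atom)
pred = np.array([-3.42, -3.10, -2.98])
ref = np.array([-3.40, -3.15, -3.00])
print(rmse(pred, ref))  # ~0.033 eV/atom
```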
Future work should expand the range of materials systems investigated, including more complex compositions and structures. Investigating the transferability of the potentials to different thermodynamic conditions, such as varying temperatures and pressures, is also essential. Further optimisation of the GNEP framework, for example through alternative gradient-based optimisers or adaptive learning-rate schedules, could yield even greater improvements in training efficiency.
A key area for future research involves integrating the developed potentials into larger-scale simulations to explore emergent material properties and phenomena, such as defect formation, diffusion mechanisms, and mechanical behaviour under extreme conditions. Combining the GNEP framework with active learning strategies, where the model iteratively requests data from the most informative regions of the configuration space, promises to further accelerate the development of accurate and efficient interatomic potentials. Researchers anticipate that these advancements will significantly contribute to the acceleration of materials discovery and the design of novel materials with tailored properties. The development of robust and efficient interatomic potentials remains a critical challenge in materials science, and the GNEP framework represents a significant step forward in addressing this challenge.
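As a rough sketch of such an active-learning loop, the self-contained toy below repeatedly queries the candidate the current model is least certain about and adds its reference label to the training set. Every component here (the mean-value “model”, the uncertainty score, and the labelling function standing in for DFT) is a hypothetical placeholder, not part of the GNEP framework.

```python
# Self-contained toy active-learning loop. All components are illustrative
# stand-ins: a real workflow would train a potential such as a GNEP and
# label newly selected structures with DFT.
import random

def train(data):
    """Toy 'model': predicts the mean of the labelled values."""
    mean = sum(y for _, y in data) / len(data)
    return lambda x: mean

def uncertainty(model, x):
    """Toy uncertainty score: distance of the input from the prediction."""
    return abs(x - model(x))

def label(x):
    """Stand-in for an expensive reference calculation (e.g. DFT)."""
    return x ** 2

random.seed(0)
pool = [random.uniform(-2.0, 2.0) for _ in range(100)]
data = [(x, label(x)) for x in pool[:5]]
del pool[:5]

for _ in range(5):
    model = train(data)
    # Query the most informative (here: most uncertain) candidate ...
    x_new = max(pool, key=lambda x: uncertainty(model, x))
    pool.remove(x_new)
    # ... and add its reference label to the training set.
    data.append((x_new, label(x_new)))
```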
👉 More information
🗞 Efficient GPU-Accelerated Training of a Neuroevolution Potential with Analytical Gradients
🧠 DOI: https://doi.org/10.48550/arXiv.2507.00528
