GPU-accelerated Tight-Binding Predicts Electronic Properties for Systems of up to 100 Million Atoms

Predicting the electronic properties of materials at the nanoscale requires immense computational power, often limiting the size and complexity of systems scientists can investigate. Yunlong Wang, Zhixin Liang, and Chi Ding, all from Nanjing University, alongside colleagues, address this challenge with a new framework called GPUTB. This method accelerates tight-binding calculations using graphics processing units, enabling rapid prediction of electronic structure for systems containing millions of atoms. The team demonstrates GPUTB’s versatility by accurately modelling both pristine graphene and more complex heterostructures, and, importantly, it reproduces established relationships between material properties with high precision, offering a powerful new tool for investigating the electronic behaviour of large-scale materials.

GPU-Accelerated Tight-Binding for Large Systems

Scientists developed GPUTB, a novel GPU-accelerated tight-binding framework, to efficiently calculate electronic properties for exceptionally large systems, overcoming limitations inherent in traditional ab initio methods. Recognizing the computational expense of predicting electronic behaviour at the device scale, the team engineered a machine learning approach that rapidly maps atomic structure to electronic structure, enabling calculations for systems containing up to 100 million atoms. The method employs atomic environment descriptors, allowing model parameters to adapt to varying atomic arrangements and facilitating transferability across different materials, basis sets, and exchange-correlation functionals. The framework also leverages linear-scaling quantum transport, reducing algorithmic complexity and enabling the study of experimental-scale systems.
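The descriptor-to-parameter idea can be illustrated with a toy model. In the sketch below (not the authors' code), a simple exponential distance law stands in for the learned map from local atomic environment to hopping integrals; GPUTB's actual model is a neural network over atomic environment descriptors, but the way environment-dependent parameters feed a tight-binding Hamiltonian is the same in spirit.

```python
# Toy environment-dependent tight-binding (illustration only; the paper's
# model is a neural network over atomic-environment descriptors).
import numpy as np

def hopping(r, t0=-2.7, r0=1.42, beta=3.0):
    # Stand-in for the learned environment -> hopping map (eV, angstrom).
    return t0 * np.exp(-beta * (r / r0 - 1.0))

def build_hamiltonian(positions, cutoff=1.6):
    # One pi orbital per atom; hoppings between atoms within the cutoff.
    n = len(positions)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            if r < cutoff:
                H[i, j] = H[j, i] = hopping(r)
    return H

# Six-atom carbon ring with 1.42-angstrom bonds (for a regular hexagon the
# ring radius equals the bond length), a minimal graphene-like fragment.
angles = 2.0 * np.pi * np.arange(6) / 6.0
ring = 1.42 * np.column_stack([np.cos(angles), np.sin(angles), np.zeros(6)])
evals = np.linalg.eigvalsh(build_hamiltonian(ring))
```

For this idealized ring every bond is exactly 1.42 Å, so the hopping reduces to t0 = −2.7 eV and the spectrum is the textbook ±2.7, ±5.4 eV ladder; in a finite-temperature snapshot each bond would instead receive its own environment-dependent hopping, which is precisely the flexibility the descriptor approach provides.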

Rather than relying on empirical parameter fitting or complex orthogonalization procedures, GPUTB learns Hamiltonians directly from data, significantly improving both accuracy and efficiency. The team trained the model using finite-temperature structures, extending its applicability to simulations of systems at realistic operating conditions and allowing for the investigation of thermal effects on electronic properties. To validate the framework’s performance, the team accurately reproduced the relationship between carrier concentration and room-temperature mobility in graphene, demonstrating its ability to predict key material properties with high precision. Furthermore, GPUTB successfully describes complex heterojunction systems, such as h-BN/graphene, showcasing its versatility in handling diverse material combinations. By combining machine learning with advanced transport methods, the team presents a powerful computational tool for investigating electronic properties in large-scale systems, paving the way for advancements in materials science and device design.
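Linear-scaling quantum transport avoids diagonalization by expanding spectral quantities in Chebyshev polynomials, needing only matrix-vector products that parallelize well on GPUs. The following sketch assumes the standard kernel polynomial method (the paper's exact transport formalism may differ) and estimates the density of states of a toy 1D chain this way:

```python
# Kernel polynomial method (KPM) sketch: density of states from Chebyshev
# moments and stochastic trace estimation; cost per moment is one
# matrix-vector product, i.e. linear in system size for sparse H.
import numpy as np

def kpm_moments(H, n_moments=64, n_random=8, seed=0):
    # mu_m = Tr T_m(H) / N, estimated with random +/-1 vectors.
    # H must be rescaled so its spectrum lies inside (-1, 1).
    rng = np.random.default_rng(seed)
    n = H.shape[0]
    mu = np.zeros(n_moments)
    for _ in range(n_random):
        r = rng.choice([-1.0, 1.0], size=n)
        t_prev, t_cur = r, H @ r          # T_0|r>, T_1|r>
        mu[0] += r @ t_prev
        mu[1] += r @ t_cur
        for m in range(2, n_moments):     # Chebyshev recurrence
            t_prev, t_cur = t_cur, 2.0 * (H @ t_cur) - t_prev
            mu[m] += r @ t_cur
    return mu / (n_random * n)

def kpm_dos(mu, energies):
    # Reconstruct the DOS with Jackson damping to suppress Gibbs ringing.
    M = len(mu)
    m = np.arange(M)
    g = ((M - m + 1.0) * np.cos(np.pi * m / (M + 1))
         + np.sin(np.pi * m / (M + 1)) / np.tan(np.pi / (M + 1))) / (M + 1)
    w = np.where(m == 0, 1.0, 2.0)        # KPM series weights
    T = np.cos(np.outer(np.arccos(energies), m))   # T_m at each energy
    return (T @ (g * w * mu)) / (np.pi * np.sqrt(1.0 - energies**2))

# Toy system: 200-site 1D chain, unit hopping, rescaled into (-1, 1).
N = 200
H = (np.eye(N, k=1) + np.eye(N, k=-1)) / 2.1
mu = kpm_moments(H)
rho = kpm_dos(mu, np.linspace(-0.9, 0.9, 5))
```

Because only matrix-vector products appear, a sparse GPU implementation scales linearly with atom count, which is what makes systems of up to 100 million atoms tractable; transport coefficients such as mobility follow from similar Chebyshev expansions of current correlation functions.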

GPUTB Accurately Simulates Large Materials Systems

This research presents GPUTB, a new computational framework designed to efficiently and accurately predict the electronic properties of large-scale materials systems. By combining a GPU-accelerated tight-binding method with a neural network that incorporates the atomic environment, GPUTB significantly reduces the computational cost associated with traditional ab initio calculations, while maintaining a high level of accuracy. The framework successfully reproduces key electronic characteristics, such as the relationship between carrier concentration and mobility in graphene, and accurately models complex heterostructures like h-BN/graphene, demonstrating its versatility and potential for simulating realistic device-scale systems. GPUTB’s ability to handle systems containing millions of atoms, including those with finite-temperature effects and structural complexities, establishes it as a powerful tool for materials science and nanotechnology. While acknowledging limitations stemming from the current basis set and network simplicity, the authors suggest that further development focusing on more complex networks and expanded basis sets will enhance its capabilities, particularly for modelling complex interfaces. Future work will likely focus on addressing these limitations and extending the framework’s applicability to an even wider range of materials and systems.

👉 More information
🗞 GPUTB: Efficient Machine Learning Tight-Binding Method for Large-Scale Electronic Properties Calculations
🧠 ArXiv: https://arxiv.org/abs/2509.06525

Quantum News

As the Official Quantum Dog (or hound), my role is to dig out the latest nuggets of quantum goodness. There is so much happening right now in technology, whether it’s AI or the march of the robots, but Quantum occupies a special space. Quite literally a special space. A Hilbert space, in fact, haha! Here I try to provide some of the news that might be considered breaking in the Quantum Computing space.

Latest Posts by Quantum News:

Toyota & ORCA Achieve 80% Compute Time Reduction Using Quantum Reservoir Computing
January 14, 2026

GlobalFoundries Acquires Synopsys’ Processor IP to Accelerate Physical AI
January 14, 2026

Fujitsu & Toyota Systems Accelerate Automotive Design 20x with Quantum-Inspired AI
January 14, 2026