Tensor Networks Model Fluid Flow Around Objects With High Accuracy

The simulation of fluid dynamics around complex geometries presents a significant computational challenge, demanding substantial memory and processing power as system size increases. Researchers are now exploring novel algorithmic approaches to mitigate these limitations, with a focus on representing fluid-flow data more efficiently. In their paper "Quantum-Inspired Tensor-Network Fractional-Step Method for Incompressible Flow in Curvilinear Coordinates", Nis-Luca van Hülst, Hashizume, and colleagues from the University of Hamburg and Hamburg University of Technology detail a method that leverages tensor networks, a mathematical framework originally developed in quantum physics, to compress the representation of fluid flow fields and the operators that govern their behaviour. Their work demonstrates accurate simulations of flow around cylinders, achieving compression factors of up to 20 for flow fields and 1,000 for differential operators, with errors remaining below 0.3% when compared to conventional finite difference methods. This approach suggests a pathway towards substantial resource savings in simulating larger, more complex fluid-dynamic systems.

Computational fluid dynamics (CFD) continually demands increased spatial and temporal resolution to accurately model complex physical phenomena, particularly when investigating multiphysics and multiscale dynamics. This drive for higher fidelity has historically been limited by computational constraints, necessitating approximate methods such as turbulence closure models and reduced-order models, which simplify the governing equations or reduce system complexity. While these approaches lower computational demands, they often sacrifice accuracy and generality, struggling to provide reliable long-term predictions due to inherent approximations in the dynamics. Quantum computing offers a novel route around these hurdles by leveraging the principles of quantum mechanics to perform calculations in a fundamentally different way than classical computers, potentially offering exponential speedups for certain problems.

Recent investigations explore various quantum-based strategies for CFD, but significant challenges remain in simulating nonlinear differential equations that govern fluid flow and accurately measuring quantum flow states. This work introduces an algorithmic framework based on tensor networks, a mathematical tool for efficiently representing high-dimensional data, enabling simulations using highly compressed representations of both the flow fields and the differential operators that describe the fluid dynamics. The method demonstrates accurate results for flows around immersed objects, offering substantial memory savings and potential runtime advantages compared to conventional finite difference simulations.

Computational fluid dynamics routinely demands substantial computational resources, particularly when simulating high-resolution flows around complex geometries, prompting researchers to employ tensor network techniques, specifically tensor train (TT) decomposition, to address these challenges and achieve greater efficiency. This innovative approach departs from traditional finite difference methods by representing the solution field – encompassing velocity and pressure – in a compressed, low-rank format, reducing the amount of data needed to accurately describe the flow and lowering both memory requirements and computational cost. The core principle involves approximating the complex flow field with a series of interconnected, lower-dimensional tensors, effectively capturing the dominant features while discarding less significant details.
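To make the idea concrete, the sketch below compresses a smooth 2D field into a quantized tensor train using sequential truncated SVDs. This is a generic textbook TT-SVD, not the authors' implementation; the grid size, test field, rank cap, and tolerance are arbitrary choices for illustration.

```python
import numpy as np

def tt_svd(field, max_rank=8, tol=1e-10):
    # Reshape a 2^k x 2^k field into a chain of binary "bit" indices and
    # split it into tensor-train cores with sequential truncated SVDs,
    # keeping only the dominant singular values at each bond.
    n_bits = int(np.log2(field.size))
    mat = field.reshape(2, -1)
    cores, rank = [], 1
    for _ in range(n_bits - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        keep = min(max_rank, int(np.sum(s > tol * s[0])))
        cores.append(u[:, :keep].reshape(rank, 2, keep))
        rank = keep
        mat = (s[:keep, None] * vt[:keep]).reshape(rank * 2, -1)
    cores.append(mat.reshape(rank, 2, 1))
    return cores

def tt_to_full(cores):
    # Contract the cores back into a full vector (for verification only).
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape(-1)

# Demo: a smooth, nearly separable field compresses to very low TT ranks.
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n)
field = np.sin(x)[:, None] + np.cos(x)[None, :]
cores = tt_svd(field)
approx = tt_to_full(cores).reshape(n, n)
rel_err = np.linalg.norm(approx - field) / np.linalg.norm(field)
n_params = sum(core.size for core in cores)  # vs. field.size = 4096
```

For the smooth field above the retained ranks stay tiny, so the stored parameter count is far below the 4,096 grid values; this is the basic mechanism behind the memory savings reported in the paper, though the authors' solver handles operators and curvilinear grids well beyond this toy.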

The presented research combines the simplicity of finite difference discretisation, a well-established technique for approximating solutions to differential equations, with the efficiency gains offered by TT decomposition, allowing for a relatively straightforward implementation while still delivering substantial performance improvements. The method’s efficacy stems from its ability to compress not only the flow field itself, but also the differential operators that govern the fluid’s behaviour, exploiting inherent redundancies to minimise the amount of data required for accurate calculations. Rigorous validation demonstrates the reliability of this new methodology through meticulous comparison against established finite difference codes, including OpenFOAM, and published results from existing literature, focusing on benchmark problems such as the flow around stationary and rotating cylinders.
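The benefit of compressing operators, not just fields, can be illustrated with a simpler cousin of the paper's TT operator format: a finite-difference Laplacian in 2D is a Kronecker sum of small 1D stencils, so it never needs to be assembled as a large matrix. The sketch below is a hypothetical illustration of that structure-exploiting idea, not the authors' curvilinear operators.

```python
import numpy as np

n = 16                       # grid points per direction (illustrative)
h = 1.0 / (n - 1)

# Dense 1D second-difference stencil on the interior points.
L1 = (np.diag(-2.0 * np.ones(n))
      + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / h**2

def laplacian_2d(u):
    # Apply the 2D operator L (x) I + I (x) L without ever forming it:
    # two small matrix products replace an n^2 x n^2 matrix-vector product.
    return L1 @ u + u @ L1.T

# Verify against the explicitly assembled Kronecker-sum operator.
u = np.random.default_rng(1).random((n, n))
A2 = np.kron(L1, np.eye(n)) + np.kron(np.eye(n), L1)
err = np.linalg.norm(A2 @ u.ravel() - laplacian_2d(u).ravel())
err /= np.linalg.norm(A2 @ u.ravel())

dense_entries = (n * n) ** 2      # 65,536 entries for n = 16
factored_entries = 2 * n * n      # 512 entries actually stored
```

The factored form stores two orders of magnitude fewer entries even at this small size; tensor-train operator formats generalise this kind of factorisation, which is how compression factors in the hundreds or thousands become possible.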

These comparisons reveal excellent quantitative agreement: errors remain below 0.3% even with flow-field compression factors of up to 20 and operator compression factors of up to 1,000 relative to traditional sparse matrix representations, confirming that the method captures the essential physics of the flow while operating with significantly reduced computational resources. The research also details the algorithmic modifications needed for transient, time-dependent flows, including an initial modulation of the velocity field to ensure stability and a carefully designed preconditioner for the pressure Poisson equation, underscoring the method's ability to handle realistic flow scenarios. The portability of the framework to parallel computing architectures further enhances its potential for scaling to even larger problems.
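The fractional-step idea the authors adapt can be sketched in its simplest classical form: advance a tentative velocity, solve a pressure Poisson equation, then subtract the pressure gradient to restore incompressibility. The periodic spectral projection below is a minimal toy stand-in, not the paper's curvilinear tensor-network solver with its custom preconditioner; the grid and test field are invented for illustration.

```python
import numpy as np

def project(u, v, dx):
    # Chorin-style projection on a periodic grid: solve the pressure
    # Poisson equation  laplacian(p) = div(u*)  spectrally, then subtract
    # grad(p) so the corrected velocity is divergence-free.
    n = u.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    u_hat, v_hat = np.fft.fft2(u), np.fft.fft2(v)
    div_hat = 1j * kx * u_hat + 1j * ky * v_hat
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                     # pin the undetermined mean pressure
    p_hat = -div_hat / k2
    u_new = np.real(np.fft.ifft2(u_hat - 1j * kx * p_hat))
    v_new = np.real(np.fft.ifft2(v_hat - 1j * ky * p_hat))
    return u_new, v_new

def spectral_div(u, v, dx):
    # Divergence evaluated with the same spectral derivatives.
    n = u.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    return np.real(np.fft.ifft2(1j * kx * np.fft.fft2(u)
                                + 1j * ky * np.fft.fft2(v)))

# Demo: a smooth velocity field with nonzero divergence gets projected.
n = 64
dx = 1.0 / n
xg = np.arange(n) * dx
X, Y = np.meshgrid(xg, xg, indexing="ij")
u0 = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)
v0 = np.cos(2 * np.pi * X) * np.sin(2 * np.pi * Y)
div_before = np.max(np.abs(spectral_div(u0, v0, dx)))
u1, v1 = project(u0, v0, dx)
div_after = np.max(np.abs(spectral_div(u1, v1, dx)))
```

In the compressed setting, the Poisson solve becomes the dominant cost per step, which is why the paper's carefully designed preconditioner matters: iterative solvers on tensor-train data converge quickly only when the operator is well conditioned.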

This research presents a novel algorithmic framework that leverages tensor networks to simulate fluid flows around immersed objects in curvilinear coordinate systems, representing a significant advancement in numerical fluid dynamics. The core innovation lies in representing both flow fields and differential operators in highly compressed tensor formats, significantly reducing memory and computational demands. The researchers demonstrate the method on both steady and transient flows around stationary and rotating cylinders, achieving excellent quantitative agreement with established finite difference simulations for Strouhal numbers, forces, and velocity fields.

The study validates the approach through rigorous comparison with conventional finite difference methods, reporting errors of less than 0.3% even with flow-field compressions reaching a factor of 20 and differential-operator compressions of up to 1,000 relative to sparse matrix representations. This compression directly translates into reduced memory requirements, and the numerical evidence suggests favourable scaling with system size, promising substantial resource savings for larger, more complex simulations. The framework's portability to graphics processing units (GPUs) offers additional performance gains.

The initial modulation technique employed within the algorithm ensures stability and accuracy during transient simulations by managing numerical diffusion. The authors also present the detailed implementation of the required tensor operations, including contraction and differentiation, supporting reproducibility and further development of the framework and offering a viable pathway towards simulating complex flows with reduced computational resources.
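To give a flavour of the contraction machinery such frameworks rely on: the inner product of two tensor trains can be evaluated core by core, sweeping a small "environment" matrix along the chain, at a cost linear in the number of cores rather than exponential. This is a generic TT routine on made-up random data, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_tt(n_sites, rank):
    # A tensor train with physical dimension 2 and uniform internal rank;
    # boundary ranks are 1 so the train represents a length-2^n vector.
    ranks = [1] + [rank] * (n_sites - 1) + [1]
    return [rng.normal(size=(ranks[i], 2, ranks[i + 1]))
            for i in range(n_sites)]

def tt_to_vec(cores):
    # Decompress to a dense length-2^n vector (verification only).
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape(-1)

def tt_inner(cores_a, cores_b):
    # Sweep an r_a x r_b environment matrix through the chain; each step
    # contracts one core of each train over its shared physical index.
    env = np.ones((1, 1))
    for a, b in zip(cores_a, cores_b):
        env = np.einsum("ij,ipk,jpl->kl", env, a, b)
    return env[0, 0]

a = random_tt(10, 3)
b = random_tt(10, 3)
exact = tt_to_vec(a) @ tt_to_vec(b)        # O(2^n) dense reference
compressed = tt_inner(a, b)                # O(n * rank^3) contraction
```

The same sweeping pattern underlies applying compressed differential operators to compressed fields, which is where the runtime advantages over dense or sparse representations originate.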

Future work should focus on extending the application of this framework to more complex geometries and flow regimes, investigating the method’s performance with turbulent flows and flows involving heat transfer, and exploring adaptive refinement strategies within the tensor network framework to optimise computational efficiency by concentrating resolution in regions of high gradients. A key area for future research involves a detailed comparative analysis of computational cost and performance against other dimensionality reduction techniques commonly employed in computational fluid dynamics, providing a clearer understanding of the specific scenarios where the tensor network approach offers the most significant advantages. Finally, developing automated workflows for generating and manipulating the tensor network representations could broaden the accessibility and usability of this promising technique.

👉 More information
🗞 Quantum-Inspired Tensor-Network Fractional-Step Method for Incompressible Flow in Curvilinear Coordinates
🧠 DOI: https://doi.org/10.48550/arXiv.2507.05222

Quantum News

As the Official Quantum Dog (or hound), my role is to dig out the latest nuggets of quantum goodness. There is so much happening right now in the field of technology, whether AI or the march of robots. But Quantum occupies a special space. Quite literally a special space. A Hilbert space, in fact, haha! Here I try to provide some of the news that might be considered breaking news in the Quantum Computing space.
