Tensor network states form a crucial method for simulating strongly correlated systems, and researchers are continually refining these techniques to tackle increasingly complex problems. Thomas Barthel, from the National Quantum Laboratory at the University of Maryland, alongside colleagues, investigates the computational cost of two prominent tensor network approaches: matrix product states (MPS) and tree tensor network states (TTNS). Their work benchmarks these simulation methods for applications spanning condensed matter, nuclear, and particle physics, focusing on two- and three-dimensional systems. Surprisingly, the study reveals that MPS simulations can outperform TTNS for larger systems, despite the potentially reduced complexity of TTNS, and establishes a clear understanding of cost scaling under realistic entanglement conditions. This research is significant because it provides vital guidance for selecting the most efficient simulation strategy for systems exhibiting area-law entanglement.
TTNS and MPS Scaling with System Size
The study rigorously compared the computational efficiency of tree tensor network states (TTNS) and matrix product states (MPS) for simulating strongly correlated systems. Researchers focused on determining how computational costs scale with system size under various boundary conditions in two and three dimensions, assuming entanglement obeys a logarithmic area law where bond dimensions scale exponentially with surface area. Experiments employed both open and periodic boundary conditions to comprehensively assess performance. For two-dimensional systems, the team investigated L × L and 4L × 4.25L geometries, revealing that while TTNS theoretically offers reduced graph distance, MPS simulations proved more efficient for low-energy states.
Specifically, implementing periodic boundary conditions in one direction with MPS required careful consideration to avoid a quadratic increase in bond dimensions, achieving costs of O(q^(5L)) or O(q^(3L)) through established methods. Conversely, TTNS on the L × L torus resulted in a total computational cost of O(q^(8L)), significantly exceeding the MPS cost. Extending the analysis to three dimensions, scientists explored cuboid lattices with periodic boundary conditions in the y and z directions, arranging MPS sites along a Hamiltonian path. They found that cutting any edge of the MPS resulted in computational costs scaling as O(q^(3L^2)).
Binary TTNS simulations were then implemented, systematically splitting the cuboid into smaller segments, revealing costs of O(q^(8L^2)) per contraction and singular value decomposition. Further investigations into L^3 cubes with varying boundary conditions consistently showed TTNS scaling at O(q^((11/2)L^2)) or O(q^(4L^2)), remaining substantially higher than the corresponding MPS cost of O(q^(3L^2)). This detailed comparison highlights the methodological innovation of precisely mapping computational costs to boundary conditions, enabling a clear demonstration of the superior efficiency of MPS in these simulations.
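To give a feel for the size of this gap, the sketch below compares the leading 3D cost exponents quoted above, O(q^(3L^2)) for MPS versus O(q^(8L^2)) for TTNS. The local dimension q = 2 is a hypothetical value chosen for illustration, not a number from the paper.

```python
import math

def log10_cost(coeff, L, q=2):
    """log10 of q^(coeff * L^2), the leading cost for a cube of side L."""
    return coeff * L**2 * math.log10(q)

for L in (2, 4, 6):
    mps = log10_cost(3, L)   # MPS exponent 3L^2
    ttns = log10_cost(8, L)  # TTNS exponent 8L^2
    print(f"L={L}: log10(cost)  MPS ~ {mps:.0f}  TTNS ~ {ttns:.0f}")
```

Even at modest L, the difference in the exponents translates into many orders of magnitude in runtime, which is why the asymptotic comparison dominates any constant-factor advantage.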
Tensor Networks: MPS Outperform TTNS in Scaling
Scientists have demonstrated a surprising result regarding the efficiency of tree tensor network states (TTNS) used for simulating strongly correlated systems. Their work rigorously compares the computational costs of TTNS with the more established matrix product states (MPS) for two- and three-dimensional simulations, particularly focusing on systems obeying entanglement area laws. Experiments reveal that despite the potential for reduced graph distances in TTNS, MPS simulations are, in fact, more efficient for large system sizes. This finding challenges initial expectations and necessitates a re-evaluation of optimal simulation strategies. The research team meticulously determined the scaling of computational costs for both MPS and TTNS under various boundary conditions.
Assuming that bond dimensions scale exponentially with the surface area of associated subsystems, they established that the time complexity per optimization step is the crucial metric for comparison. Results demonstrate a cost separation in large systems, where MPS, utilizing a snake or helical mapping, outperform TTNS with an exponential advantage. Specifically, the increased contraction costs inherent in TTNS ultimately outweigh the benefits of their potentially smaller graph distances, at least asymptotically. Detailed analysis connected bond dimensions in both MPS and TTNS to Rényi entanglement entropies for appropriate system bipartitions.
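The snake mapping mentioned above can be sketched in a few lines; this uses the standard boustrophedon ordering of a 2D lattice onto a 1D MPS chain, not code from the paper.

```python
def snake_order(Lx, Ly):
    """Map 2D lattice sites (x, y) to a 1D MPS ordering, reversing
    direction on every other row so lattice neighbors stay close
    along the chain."""
    order = []
    for y in range(Ly):
        xs = range(Lx) if y % 2 == 0 else range(Lx - 1, -1, -1)
        order.extend((x, y) for x in xs)
    return order

path = snake_order(3, 2)
print(path)  # six sites, traversed left-to-right then right-to-left
```

A helical mapping plays the analogous role for periodic boundary conditions, winding around the lattice instead of reversing at each edge.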
The study showed that, for any given approximation accuracy, the relevant bond dimensions scale, up to polynomial prefactors, as M_i ∼ e^(c|∂A_i|), where |∂A_i| represents the surface area of a subsystem. This scaling is a natural consequence of the entanglement area and log-area laws commonly observed in ground and low-energy states of systems with finite-range interactions. Scientists recorded that the computational cost for MPS scales as O(M^3), where M represents the matrix dimensions. Further investigations focused on binary TTNS with vertex degree three, revealing that the cost of single-site algorithms scales as O(M^(z+1)), where z is the vertex degree.
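A minimal numerical sketch of this area-law scaling follows; the constant c = 0.5 is a hypothetical placeholder, and the cut-surface formulas are the usual ones for bipartitions of square and cubic lattices.

```python
import math

def bond_dimension(surface_area, c=0.5):
    """Bond dimension M ~ e^(c * |∂A|), growing exponentially with the
    surface area of the cut, up to polynomial prefactors."""
    return math.exp(c * surface_area)

# A bipartition of a 2D L x L lattice has cut surface |∂A| ~ L, while a
# planar cut through a 3D L^3 cube has |∂A| ~ L^2, so bond dimensions
# (and hence the O(M^3) MPS step cost) grow far faster in 3D.
for L in (4, 8):
    print(f"L={L}: 2D M ~ {bond_dimension(L):.1f}, "
          f"3D M ~ {bond_dimension(L**2):.3e}")
```

This is why the base q and the exponent's dependence on L, rather than constant prefactors, decide which network is cheaper at large system sizes.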
The most computationally intensive operations in both MPS and TTNS, namely singular value decompositions and effective-Hamiltonian contractions, were carefully examined. The analysis shows that for TTNS these operations have a cost of O(M_1 M_2 M_3^2) when considering a tensor with bond dimensions M_1, M_2, and M_3, highlighting the critical role of bond dimension scaling in determining overall efficiency. This work delivers a nuanced understanding of tensor network performance and informs the development of more efficient simulation techniques for complex physical systems.
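The O(M_1 M_2 M_3^2) singular value decomposition can be illustrated with NumPy: splitting off one leg of a rank-3 tensor amounts to an SVD of an (M_1·M_2) × M_3 matrix, which costs O(M_1 M_2 M_3^2) operations when M_1·M_2 ≥ M_3. The dimensions below are small hypothetical values for demonstration only.

```python
import numpy as np

M1, M2, M3 = 6, 5, 4  # hypothetical bond dimensions of one tensor
T = np.random.rand(M1, M2, M3)

# Group the first two legs, then decompose: for an (M1*M2) x M3 matrix
# with M1*M2 >= M3, the SVD costs O(M1*M2*M3^2) floating-point operations.
mat = T.reshape(M1 * M2, M3)
U, S, Vh = np.linalg.svd(mat, full_matrices=False)

# The thin SVD returns M3 singular values, and reconstruction is exact
# up to floating-point error.
assert U.shape == (M1 * M2, M3)
assert S.shape == (M3,)
assert np.allclose(U @ np.diag(S) @ Vh, mat)
print("singular values:", S.round(3))
```

In a sweep-based optimization, such a decomposition is performed at every bond, so its cubic dependence on the largest bond dimension sets the scaling of the whole algorithm.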
Entanglement Scaling Dictates MPS Efficiency
This work establishes a rigorous connection between entanglement scaling and computational cost in tensor network simulations, specifically comparing tree tensor networks (TTNS) and matrix product states (MPS). The authors demonstrate that, under the assumption of entanglement area laws, bond dimensions scale exponentially with subsystem surface area for both MPS and TTNS approximations of quantum many-body states. Crucially, they determine that despite this shared scaling, MPS simulations are surprisingly more efficient than TTNS for simulating low-energy states in two and three dimensions, particularly for larger systems. The research details how computational costs scale with system size and different boundary conditions for both tensor network types, building upon established bounds for bond dimensions based on Rényi entanglement entropies. While acknowledging that polynomial cost factors and the distinction between total and single-step costs were not fully explored, the authors provide a clear framework for understanding the trade-offs between different tensor network approaches. Future work, they suggest, could extend this analysis to other tensor network schemes like PEPS and MERA, as well as explore applications beyond the realm of quantum physics, and incorporate tensor constraints to refine the cost estimations.
👉 More information
🗞 Cost scaling of MPS and TTNS simulations for 2D and 3D systems with area-law entanglement
🧠 ArXiv: https://arxiv.org/abs/2601.08132
