Learning local quantum Hamiltonians is a challenging problem in quantum physics, and Chebyshev approximation makes it more tractable. This approach transforms the problem into a polynomial optimization problem (POP) whose semidefinite programming (SDP) relaxations have bounded dimensions. The main contribution is a new flat polynomial approximation of the exponential function based on the Chebyshev expansion. It shows that the SDP relaxations for learning local quantum Hamiltonians can be solved in polynomial time under mild assumptions, a significant improvement over previous methods. Chebyshev series and Bessel functions also contribute to the improvement by providing a uniform approximation of an analytic function.
What is the Problem of Learning Local Quantum Hamiltonians?
The problem of learning local quantum Hamiltonians is a challenging one in quantum physics. It involves recovering the Hamiltonian governing a quantum system, which is nontrivial due to noncommutativity and nonconvexity. The problem is approached by approximating the matrix exponential or matrix logarithm with polynomials, transforming it into a polynomial optimization problem (POP), which can be either matrix-valued or operator-valued. The moment/sum-of-squares (SOS) hierarchy of semidefinite programming (SDP) relaxations is then obtained.
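As an illustrative sketch (not the paper's construction), consider a single-qubit Hamiltonian H = θZ with one unknown coefficient θ. The Gibbs weights involve e^{±θ}; replacing the exponential with a polynomial surrogate turns quantities such as the partition function into polynomials in θ, which is the step that yields a POP. The degree and the Taylor surrogate below are choices of this sketch only:

```python
import math

def taylor_exp(x, d=8):
    """Degree-d Taylor polynomial surrogate for exp(x)."""
    return sum(x**k / math.factorial(k) for k in range(d + 1))

# H = theta * Z has eigenvalues +theta and -theta, so the
# partition function is Z(theta) = e^{-theta} + e^{theta}.
theta = 0.3
exact = math.exp(-theta) + math.exp(theta)

# Polynomial surrogate: a polynomial in the unknown theta.
# Objectives/constraints of the resulting POP are built from such terms.
surrogate = taylor_exp(-theta) + taylor_exp(theta)

print(exact, surrogate)  # close for |theta| <= 1
```

Once every exponential is replaced this way, the unknown Hamiltonian coefficients appear only polynomially, and the moment/SOS hierarchy applies.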
Recent papers in Theoretical Computer Science have considered using Taylor expansion to obtain a polynomial approximation of the matrix exponential. However, these methods have limitations: in particular, they omit the study of bit complexity, which is crucial in proving that an algorithm runs in polynomial time.
In the physics community, Magnus expansion and moment/SOS relaxations have been used to learn models of closed and open quantum systems from estimates of arbitrary states, with demonstrated practical performance. The Chebyshev expansion, however, is numerically superior to the Taylor expansion.
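A minimal numeric sketch of that superiority: compare the uniform error on [−1, 1] of a degree-5 Taylor truncation of exp against a degree-5 Chebyshev approximant. Here the Chebyshev coefficients are obtained by interpolation at Chebyshev nodes, an assumption of this sketch (the paper works with the Chebyshev series itself):

```python
import math

DEG = 5

def taylor(x):
    """Degree-DEG Taylor truncation of exp at 0."""
    return sum(x**k / math.factorial(k) for k in range(DEG + 1))

def cheb_coeffs():
    """Chebyshev coefficients of the interpolant of exp at DEG+1 Chebyshev nodes."""
    n = DEG + 1
    nodes = [math.cos((2*j + 1) * math.pi / (2*n)) for j in range(n)]
    vals = [math.exp(t) for t in nodes]
    # Discrete orthogonality of T_k at Chebyshev nodes.
    c = [2.0 / n * sum(vals[j] * math.cos(k * (2*j + 1) * math.pi / (2*n))
                       for j in range(n)) for k in range(n)]
    c[0] /= 2.0
    return c

def cheb_eval(c, x):
    """Clenshaw recurrence for sum_k c[k] * T_k(x)."""
    b1 = b2 = 0.0
    for ck in reversed(c[1:]):
        b1, b2 = 2*x*b1 - b2 + ck, b1
    return x*b1 - b2 + c[0]

grid = [i / 500 for i in range(-500, 501)]
c = cheb_coeffs()
taylor_err = max(abs(math.exp(x) - taylor(x)) for x in grid)
cheb_err = max(abs(math.exp(x) - cheb_eval(c, x)) for x in grid)
print(taylor_err, cheb_err)  # the Chebyshev error is markedly smaller
```

At the same degree, the Chebyshev approximant spreads its error evenly over the interval instead of concentrating it at the endpoints, which is what the uniform-error comparison captures.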
How Does Chebyshev Approximation Improve the Learning of Quantum Hamiltonians?
The use of Chebyshev approximation leads to instances of a polynomial optimization problem (POP) for which the SDP relaxations have bounds on their dimensions. The main technical contribution of this approach is a new flat polynomial approximation to the exponential function based on the Chebyshev expansion.
This new flat approximation shows that SDP relaxations of learning local quantum Hamiltonians given copies of their Gibbs state can be solved in polynomial time under mild assumptions. This is a significant improvement over previous methods, which were restricted to the case of high-temperature Gibbs states or had bounds on their dimensions without any restriction on the temperature of the Gibbs states.
What is a Flat Approximation of the Exponential Using Chebyshev Series?
Polynomial approximations of the matrix exponential are key technical tools in the analysis of several optimization problems in quantum information theory. A flat approximation of the exponential is a polynomial, called an (ε, η, K)-approximation, that satisfies certain conditions.
Unfortunately, the Taylor, Chebyshev, and QSVT-style series fail to provide a flat approximation of the exponential directly. However, a flat approximation can be constructed based on the Taylor series. Motivated by the numerical advantages that a Chebyshev expansion provides over Taylor's, two novel constructions of a flat approximation based on the Chebyshev expansion have been presented.
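The precise flatness conditions are given in the paper. As a hedged illustration of the underlying idea only (a shifting trick, not the paper's construction): any polynomial p with uniform error at most ε on an interval can be shifted to give one-sided polynomial bounds p − ε ≤ e^x ≤ p + ε there:

```python
import math

def taylor(x, d=10):
    """Degree-d Taylor truncation of exp at 0."""
    return sum(x**k / math.factorial(k) for k in range(d + 1))

grid = [i / 200 for i in range(-200, 201)]
# Uniform error of the truncation on a fine grid over [-1, 1].
eps = max(abs(math.exp(x) - taylor(x)) for x in grid)

# Shifted polynomials sandwich exp (small slack guards against rounding).
lower_ok = all(taylor(x) - 1.01 * eps <= math.exp(x) for x in grid)
upper_ok = all(math.exp(x) <= taylor(x) + 1.01 * eps for x in grid)
print(eps, lower_ok, upper_ok)
```

One-sided polynomial bounds of this kind are what allow the exponential to be replaced by polynomials inside SDP relaxations without losing validity of the constraints.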
What are the Approximation Properties of Chebyshev Series and Bessel Functions?
Chebyshev series and Bessel functions are relevant in the context of the Chebyshev series approximation of the exponential. The Chebyshev polynomials of the first kind satisfy a three-term recurrence relation. A truncation of the Chebyshev series provides a uniform approximation of an analytic function on [−1, 1], provided that the function possesses a complex analytic continuation.
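The recurrence in question is T_0(x) = 1, T_1(x) = x, and T_{n+1}(x) = 2x·T_n(x) − T_{n−1}(x), which together with the identity T_n(cos θ) = cos(nθ) underpins the series manipulations. A quick check:

```python
import math

def chebyshev_T(n, x):
    """T_n(x) via the three-term recurrence."""
    t_prev, t = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2*x*t - t_prev
    return t

# Verify T_n(cos(theta)) == cos(n * theta) for several n.
theta = 0.7
for n in range(8):
    print(n, chebyshev_T(n, math.cos(theta)), math.cos(n * theta))
```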
The modified Bessel function, denoted I_ν(z), satisfies certain conditions. The Chebyshev expansion given by a certain formula exhibits better approximation properties than Taylor's. The flatness property hinges on a recent result offering an algebraic criterion, which can be used to show that the Chebyshev series truncation provides upper and lower bounds for the exponential for x > 1.
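The expansion in question is the classical identity e^x = I_0(1) + 2 Σ_{k≥1} I_k(1) T_k(x) for x ∈ [−1, 1], where I_k is the modified Bessel function of the first kind. A sketch, with I_k computed from its power series so that no special-function library is assumed:

```python
import math

def bessel_I(k, z, terms=25):
    """Modified Bessel function I_k(z) via its power series."""
    return sum((z / 2) ** (2*m + k) / (math.factorial(m) * math.factorial(m + k))
               for m in range(terms))

def chebyshev_T(n, x):
    """T_n(x) via the three-term recurrence."""
    t_prev, t = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2*x*t - t_prev
    return t

def exp_via_chebyshev(x, K=10):
    """Truncated Chebyshev series of exp with Bessel-function coefficients."""
    return bessel_I(0, 1.0) + 2 * sum(bessel_I(k, 1.0) * chebyshev_T(k, x)
                                      for k in range(1, K + 1))

x = 0.4
print(exp_via_chebyshev(x), math.exp(x))  # agree to high precision
```

Because I_k(1) decays super-exponentially in k, even a short truncation of this series is extremely accurate on [−1, 1].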
How Can the Learning of Quantum Hamiltonians be Improved?
As summarized above, Chebyshev approximation yields POP instances whose SDP relaxations have bounded dimensions, and the new flat polynomial approximation of the exponential shows that these relaxations can be solved in polynomial time under mild assumptions, without any restriction on the temperature of the Gibbs states.
The use of the Chebyshev series and Bessel functions in approximating the exponential also contributes to improving the learning of quantum Hamiltonians. These mathematical tools provide a uniform approximation of an analytic function, which is crucial in analyzing optimization problems in quantum information theory.
The article “Learning quantum Hamiltonians at any temperature in polynomial time with Chebyshev and bit complexity” by Aleš Wodecki and Jakub Mareček was published on arXiv (Cornell University) on 2024-02-08. Find more at https://doi.org/10.48550/arxiv.2402.05552
