Researchers at the Indian Institute of Technology have developed a new optimisation principle that enhances the performance of variational algorithms by explicitly accounting for the relationships between potential solutions. The approach, detailed by Ankit Gill and Kunal Pal, addresses a recognised limitation within standard natural gradient descent methods. It employs either an ambient space construction or a first-principles method leveraging quantum state overlap to dynamically rescale step sizes during the optimisation process, effectively creating a “loss-aware” system. Benchmarking across both variational quantum circuits and classical neural networks demonstrates that, while standard natural gradient descent generally remains the most robust method, these conformal schemes offer the potential for accelerated convergence under specific, well-defined conditions.
Loss-aware optimisation via conformal schemes accelerates convergence in quantum and classical systems
Conformal schemes demonstrated a marked improvement in best-case convergence rates, surpassing standard natural gradient descent in particular optimisation regimes. Traditionally, achieving faster convergence required carefully calibrated algorithms, often demanding substantial computational resources for tuning. The new methods offer a route to accelerated learning without such precise calibration. The advance stems from incorporating the geometry of the space of possible outcomes into the optimisation process, effectively constructing a “loss-aware” natural gradient. The core insight is that the landscape of possible solutions is defined not only by the parameters being adjusted, but also by the relationships, the ‘distances’, between those solutions in terms of their resulting loss or error.
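To make this concrete, here is a minimal formalisation of the idea in our own notation; the authors’ exact construction may differ. If $g(\theta)$ denotes the underlying metric on parameter space and $\Omega(\theta) > 0$ is a loss-dependent conformal factor, the rescaled metric is

$$\tilde{g}(\theta) = \Omega^{2}(\theta)\, g(\theta),$$

and the corresponding natural gradient update becomes

$$\theta_{t+1} = \theta_t - \eta\, \tilde{g}^{-1}(\theta_t)\, \nabla_{\theta} L(\theta_t) = \theta_t - \frac{\eta}{\Omega^{2}(\theta_t)}\, g^{-1}(\theta_t)\, \nabla_{\theta} L(\theta_t).$$

Because $\Omega^{-2}$ is a positive scalar, the descent direction $g^{-1}\nabla_{\theta} L$ is unchanged; only the effective step size $\eta / \Omega^{2}$ is rescaled, shrinking automatically wherever the loss-dependent factor grows large.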
Rescaling step sizes while preserving the descent direction improves optimisation performance across both variational quantum circuits and classical neural networks, offering a nuanced approach to complex algorithmic challenges. In the variational quantum circuit examples, one conformal variant, CLA-3-QNG, achieved superior results, although standard natural gradient descent consistently offered stronger average convergence across the majority of tested conditions. Tests on classical neural networks revealed a similar pattern: CLA-3-NG surpassed not only standard Fisher information metric-based updates but also popular algorithms such as Adam and SGD-RMS. This suggests the conformal schemes are particularly effective when the loss landscape exhibits specific characteristics, such as high curvature or complex correlations between parameters. The technique also builds on existing gradient clipping methods, effectively decreasing learning rates in regions of high curvature to improve stability and prevent oscillations during optimisation.

Current results, however, are primarily focused on low-dimensional models, which limits the ability to demonstrate scalability to the very large, complex systems required for practical, real-world applications. Further research is needed to assess performance on higher-dimensional problems and to reduce the computational overhead of calculating the conformal rescaling factors.
The natural gradient descent technique itself is rooted in adapting update steps to the geometry of the parameter space. Standard gradient descent treats all directions in parameter space equally, whereas natural gradient descent accounts for the curvature of the parameter manifold, allowing more efficient exploration of the solution space. The Fisher information metric, used in classical systems, and the Fubini-Study metric, used in quantum systems, quantify this curvature. By preconditioning the gradient with the inverse of this metric, the algorithm takes steps aligned with the natural geometry of the problem. The conformal schemes presented here extend the concept by also folding in the geometry of the outcome space, adding a further layer of sophistication to the optimisation process.
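As a concrete illustration, the following sketch implements a natural-gradient step preconditioned by a metric, with an optional loss-aware rescaling. The quadratic loss, the stand-in metric, and the particular conformal factor Omega² = 1 + λ·L(θ) are illustrative assumptions for this sketch, not the authors’ construction.

```python
# A minimal sketch of a (conformal) natural-gradient step on a toy
# quadratic loss. The metric, the conformal factor, and the constant
# lam are illustrative choices, not the paper's exact scheme.
import numpy as np

rng = np.random.default_rng(0)

# Toy loss: L(theta) = 0.5 * theta^T A theta with an ill-conditioned A,
# standing in for a curved loss landscape.
A = np.diag([50.0, 1.0])

def loss(theta):
    return 0.5 * theta @ A @ theta

def grad(theta):
    return A @ theta

def metric(theta):
    # Stand-in for the Fisher information / Fubini-Study metric; here we
    # simply reuse the curvature A plus a small ridge for invertibility.
    return A + 1e-6 * np.eye(len(theta))

def conformal_factor(theta, lam=0.5):
    # Hypothetical loss-aware factor Omega^2 = 1 + lam * L(theta):
    # steps shrink where the loss is large; the direction is untouched.
    return 1.0 + lam * loss(theta)

def step(theta, eta=0.1, loss_aware=True):
    direction = np.linalg.solve(metric(theta), grad(theta))  # g^{-1} grad L
    scale = conformal_factor(theta) if loss_aware else 1.0
    return theta - (eta / scale) * direction

theta = rng.normal(size=2)
for t in range(20):
    theta = step(theta)
print("final loss:", loss(theta))
```

Early in training, where the loss is large, the factor damps the step size; as the loss approaches zero the update reverts to an ordinary natural-gradient step.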
Mapping solution relationships onto optimisation landscapes improves algorithmic understanding
Understanding the geometry of the parameter space is crucial for improving optimisation algorithms in both quantum and classical systems, but this work reveals a previously overlooked limitation: standard techniques often remain blind to the geometry of the outcomes themselves. ‘Loss-aware’ natural gradient updates have now been introduced, effectively mapping the relationships between results onto the optimisation landscape. This allows the algorithm to better understand how changes in parameters affect overall solution quality, leading to more informed optimisation steps. Despite not consistently outperforming established optimisation techniques, this represents a valuable advance in understanding how algorithms ‘see’ and navigate complex problems. Incorporating the geometry of solution outcomes, the results themselves, into the optimisation process has been an under-explored aspect of both classical and quantum computing, and it opens new avenues for algorithmic design.
These ‘loss-aware’ updates, while not consistently achieving superior performance to existing methods, may encourage a shift towards more nuanced optimisation strategies in both classical and quantum computing. The team developed the updates by considering how distances between results influence the optimisation process: step sizes are rescaled during optimisation while the direction of descent is preserved, refining the learning process without disrupting the overall optimisation trajectory. Standard natural gradient descent remains a broadly reliable and versatile technique, but the new conformal schemes offer improved convergence under specific conditions, suggesting a subtle yet potentially powerful route to algorithmic improvement. The underlying principle is to move beyond minimising the immediate loss and instead consider the broader landscape of possible solutions, allowing the algorithm to make more informed choices about which direction to explore. Future work will focus on extending the techniques to higher-dimensional problems and exploring applications in areas such as machine learning, materials discovery, and drug design.
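The direction-preservation property is easy to verify numerically. The self-contained snippet below uses deliberately arbitrary values for the metric and gradient to confirm that dividing the preconditioned gradient by any positive scalar changes only the step length, never the descent direction.

```python
# Self-contained check that conformal rescaling preserves the descent
# direction: dividing the inverse-metric gradient by a positive scalar
# changes only the step length. All quantities here are illustrative.
import numpy as np

g = np.array([[50.0, 0.0], [0.0, 1.0]])   # stand-in metric
grad_L = np.array([3.0, -2.0])            # stand-in loss gradient

nat_step = np.linalg.solve(g, grad_L)     # plain natural-gradient step
omega_sq = 2.7                            # any positive conformal factor
aware_step = nat_step / omega_sq          # loss-aware rescaled step

cosine = nat_step @ aware_step / (
    np.linalg.norm(nat_step) * np.linalg.norm(aware_step))
print("cosine similarity:", cosine)       # exactly 1.0: same direction
print("length ratio:", np.linalg.norm(aware_step) / np.linalg.norm(nat_step))
```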
In summary, the researchers introduced loss-aware natural gradient updates, a new optimisation principle that folds the geometry of solution outcomes into the optimisation process, so the algorithm accounts for how parameter changes alter the relationships between results. Although the schemes do not consistently outperform existing techniques, the work deepens our understanding of how algorithms navigate complex problems, and the authors intend to extend the approach to higher-dimensional problems and to applications in machine learning and materials discovery.
👉 More information
🗞 Loss-aware state space geometry for quantum variational algorithms
🧠 ArXiv: https://arxiv.org/abs/2604.05627
