AI Sharpens Telescope Images for Clearer Cosmos

Scientists are developing improved methods for modelling the Point Spread Function (PSF), a critical component in cosmological weak-lensing surveys. Dayana Andrea Henao Arbeláez of Universidad de Antioquia, Pierre-François Léget of the Department of Astrophysical Sciences at Princeton University, and Andrés A. Plazas Malagón of SLAC National Accelerator Laboratory present a new data-driven, deep-learning model that surpasses the accuracy of current state-of-the-art techniques such as PIFF. Accurate PSF modelling is essential because it directly affects measurements of cosmic shear, a key cosmological probe of the Universe’s expansion and the growth of large-scale structure for dark energy studies. The new framework, trained on data from the Hyper Suprime-Cam, captures spatial coherence across the telescope’s field of view and achieves a significant reduction in reconstruction error compared with existing methods, paving the way for its implementation in the Vera C. Rubin Observatory’s Legacy Survey of Space and Time (LSST) Science Pipelines.

A new artificial intelligence technique is sharpening the modelling of telescope images, promising a clearer view of the distant Universe. Accurate modelling of telescope optics and atmospheric blurring is essential for understanding dark energy and the expansion of the cosmos. This work introduces an AI-based framework that surpasses existing methods in accounting for variations across the telescope’s field of view, enabling more precise cosmological measurements.

Current techniques often struggle to maintain spatial coherence across large areas of the sky, limiting the precision of weak-lensing analyses. By minimising these errors, scientists can reduce uncertainties in their measurements of cosmic shear, leading to a more refined understanding of dark energy and the evolution of the Universe. This refined model also lays the groundwork for future large-scale surveys, notably the Vera C. Rubin Observatory’s Legacy Survey of Space and Time (LSST). At a time when cosmological mysteries abound, this advancement offers a powerful new tool for probing the fundamental nature of reality.

For decades, cosmology has relied on increasingly sophisticated observations to unravel the Universe’s history, from the debates surrounding its size in the 1920s to the recent discovery of accelerating expansion. Unlike earlier methods, this AI-driven approach offers a data-driven solution to a long-standing problem in astronomical imaging. Once fully integrated into the LSST Science Pipelines, this technology will enable astronomers to map the distribution of dark matter with unprecedented precision, potentially revealing new insights into its composition and behaviour. Lower error rates translate directly into more precise cosmological measurements, allowing astronomers to better understand the expansion history of the Universe and the growth of large-scale structure.

Reducing this error is a complex undertaking, demanding advanced modelling techniques. The research introduces an autoencoder framework, trained on stellar images from the Hyper Suprime-Cam (HSC) on the Subaru Telescope, combined with a Gaussian process that interpolates the model across the telescope’s field of view. By capturing systematic variations across the focal plane, the model demonstrably outperforms PIFF, the current state-of-the-art method.
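To make the two-stage design concrete, here is a minimal sketch in Python, assuming a PyTorch autoencoder over star cutouts and a scikit-learn Gaussian process over focal-plane positions. The network sizes, variable names, and toy data are illustrative assumptions, not the authors’ actual configuration.

```python
import torch
import torch.nn as nn
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

class PSFAutoencoder(nn.Module):
    """Compress star cutouts to a small latent vector and back."""
    def __init__(self, stamp_size=25, latent_dim=8):
        super().__init__()
        n_pix = stamp_size * stamp_size
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_pix, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, n_pix),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z).view_as(x), z

# Toy stand-ins for HSC star cutouts and their focal-plane positions.
stamps = torch.rand(500, 25, 25)
positions = torch.rand(500, 2)

model = PSFAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):  # abbreviated training loop
    recon, _ = model(stamps)
    loss = nn.functional.mse_loss(recon, stamps)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage two: a Gaussian process (one shared RBF kernel, multi-output
# targets) interpolates the latent codes, so the PSF can be predicted
# anywhere in the field of view, not only at observed star positions.
with torch.no_grad():
    _, latents = model(stamps)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1))
gp.fit(positions.numpy(), latents.numpy())

z_query = torch.tensor(gp.predict([[0.5, 0.5]]), dtype=torch.float32)
psf_at_query = model.decoder(z_query).view(25, 25)
```

Interpolating in the compact latent space, rather than pixel by pixel, is the design choice in this sketch that mirrors the paper’s stated goal of carrying coherent spatial information across the full focal plane.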

Specifically, the new framework achieves a reconstruction error of 3.4 × 10⁻⁶, compared with PIFF’s 3.7 × 10⁻⁶, a relative reduction of roughly eight per cent. Once implemented, this advancement promises to refine the accuracy of weak-lensing analyses. The training set comprised 2,787,956 stars observed across 404 visits, providing ample data for the autoencoder to learn the complex patterns of the PSF.
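The article does not spell out how this reconstruction error is defined; assuming it is a mean squared pixel residual between observed stars and the model’s reconstructions (a common choice, though the paper’s exact metric may differ), the quantity and the implied improvement work out as follows.

```python
import numpy as np

def reconstruction_error(observed, modelled):
    # Mean squared pixel residual, averaged over stars and pixels;
    # an assumed definition, not necessarily the paper's exact metric.
    return np.mean((observed - modelled) ** 2)

# Relative improvement implied by the quoted numbers:
improvement = (3.7e-6 - 3.4e-6) / 3.7e-6
print(f"reduction relative to PIFF: {improvement:.1%}")  # -> 8.1%
```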

The PSF characterizes how a point source, such as a star, appears after its light travels through atmospheric turbulence and the telescope optics, essentially capturing a “blurred fingerprint” of the entire imaging system. Each star’s data included 17 features, encompassing observed images, PIFF-generated PSF models, standard deviations, ellipticity components, spatial positions, brightness, detector ID, visit number, and filter band. By incorporating these features, the model effectively captures the PSF’s behaviour across the telescope’s focal plane, ultimately yielding the observed reduction in reconstruction error. The stellar images were obtained with HSC on the Subaru Telescope, providing the raw data for training the autoencoder. Once trained, the autoencoder can reconstruct the PSF from limited input data, offering a computationally efficient approach, while the integrated Gaussian process interpolates the PSF across the entire field of view.
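As a rough illustration, a per-star training record covering those feature categories might look like the sketch below; the field names and grouping are hypothetical, since the dataset’s actual schema is not reproduced in the article.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class StarRecord:
    # Hypothetical fields grouping the 17 features described above.
    image: np.ndarray       # observed star cutout
    piff_psf: np.ndarray    # PIFF-generated PSF model at the same spot
    sigma_star: float       # standard deviation (size) of the star
    sigma_piff: float       # standard deviation of the PIFF model
    e1: float               # ellipticity component 1
    e2: float               # ellipticity component 2
    x: float                # focal-plane position
    y: float
    flux: float             # brightness
    detector_id: int        # CCD identifier
    visit: int              # exposure/visit number
    band: str               # filter band, e.g. "i"
```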

This is a departure from earlier techniques that treated different parts of the image separately, losing vital spatial information. For years, such subtle blurring has hampered astronomical surveys attempting to map the distribution of dark matter, a substance making up approximately 85% of the Universe’s matter. With the reconstruction error now measurably reduced, more precise measurements of cosmic shear, the distortion of distant galaxies caused by intervening matter, are within reach. Limitations remain: the model was trained on data from a specific telescope and camera, so adapting it to other instruments will require further work.
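PSF errors leak into shear measurements through the PSF’s size and ellipticity, so a standard diagnostic is to compare the second-moment ellipticities of observed stars with those of the model’s reconstructions. The sketch below uses the textbook unweighted-moments definition, which is not necessarily the estimator used in the paper.

```python
import numpy as np

def moments_ellipticity(img):
    """Ellipticity (e1, e2) from unweighted second moments of an image."""
    y, x = np.indices(img.shape)
    total = img.sum()
    xc, yc = (x * img).sum() / total, (y * img).sum() / total
    qxx = ((x - xc) ** 2 * img).sum() / total
    qyy = ((y - yc) ** 2 * img).sum() / total
    qxy = ((x - xc) * (y - yc) * img).sum() / total
    e1 = (qxx - qyy) / (qxx + qyy)
    e2 = 2 * qxy / (qxx + qyy)
    return e1, e2
```

Residuals in (e1, e2) between stars and their modelled PSFs are what propagate into additive biases in cosmic shear, which is why driving down the reconstruction error matters for weak lensing.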

The true test will be its performance when applied to the vast datasets generated by the Rubin Observatory, where subtle biases can become amplified. Once this is proven, the next step will likely involve combining this technique with other error-reduction strategies, pushing the boundaries of what’s possible in cosmological observation and potentially revealing new insights into the nature of dark matter and dark energy.

👉 More information
🗞 Deep Learning for Point Spread Function Modeling in Cosmology
🧠 ArXiv: https://arxiv.org/abs/2602.15780

Rohail T.

I am a quantum scientist exploring the frontiers of physics and technology. My work focuses on uncovering how quantum mechanics, computing, and emerging technologies are transforming our understanding of reality. I share research-driven insights that make complex ideas in quantum science clear, engaging, and relevant to the modern world.

Latest Posts by Rohail T.:

Accurate Quantum Sensing Now Accounts for Real-World Limitations
March 13, 2026

Quantum Error Correction Gains a Clearer Building Mechanism for Robust Codes
March 10, 2026

Models Achieve Reliable Accuracy and Exploit Atomic Interactions Efficiently
March 3, 2026