Determining the fundamental properties of stars, such as effective temperature and surface gravity, is a significant computational challenge when analyzing the vast datasets produced by modern spectroscopic surveys. Jun-Chao Liang, Yin-Bi Li, and A-Li Luo, together with colleagues, address this problem by presenting a new, scalable framework for stellar parameter inference built upon the existing LAMOST Atmospheric Parameter Pipeline (LASP). The team developed a modular Python-based system, incorporating both CPU optimization and, crucially, GPU acceleration, to dramatically reduce processing time while maintaining accuracy. The resulting framework achieves a substantial speedup, processing ten million spectra in just seven hours on a GPU compared with 84 hours on a standard CPU, and delivers results that align with established pipelines and with repeat observations, offering a powerful tool for analyzing data from current and future large-scale surveys.
Rather than simply translating the original code, the team fundamentally refactored the pipeline into two complementary modules, significantly enhancing both efficiency and scalability. LASP-CurveFit represents a new implementation of the core fitting procedure, designed for CPU execution, and preserves the legacy logic while incorporating improvements to data input/output and multithreaded processing. Complementing this, LASP-Adam-GPU pioneers a GPU-accelerated approach to parameter inference, introducing grouped optimization, a technique that constructs a single residual function across multiple spectra simultaneously.
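To make the grouped-optimization idea concrete, here is a minimal PyTorch sketch of the pattern, not the authors' implementation: a toy Gaussian absorption line stands in for the pipeline's interpolated synthetic-spectrum model, and a single scalar loss summed over a batch of spectra lets one Adam step update every star's parameters at once.

```python
"""Minimal sketch of grouped optimization with Adam (assumed PyTorch;
not the authors' code). The real pipeline fits interpolated synthetic
spectra; a toy Gaussian line stands in here so the example runs."""
import torch

def toy_model(params, wave):
    # params: (n_spec, 2) -> line centre and depth, toy stand-ins for
    # stellar parameters such as Teff and logg.
    centre = params[:, 0:1]   # (n_spec, 1)
    depth = params[:, 1:2]
    return 1.0 - depth * torch.exp(-0.5 * ((wave - centre) / 2.0) ** 2)

# Synthetic "observed" spectra for 4 stars on a common wavelength grid.
torch.manual_seed(0)
wave = torch.linspace(6550.0, 6570.0, 200)               # (n_pix,)
true = torch.tensor([[6560.0, 0.4], [6562.0, 0.6],
                     [6558.0, 0.3], [6561.0, 0.5]])
obs = toy_model(true, wave) + 0.01 * torch.randn(4, 200)

# One shared parameter tensor for the whole group of spectra.
params = torch.tensor([[6559.0, 0.5]] * 4, requires_grad=True)
opt = torch.optim.Adam([params], lr=0.05)

for step in range(500):
    opt.zero_grad()
    resid = toy_model(params, wave) - obs   # (n_spec, n_pix)
    loss = (resid ** 2).sum()               # single grouped residual
    loss.backward()                         # one backward pass...
    opt.step()                              # ...updates all spectra

print(params.detach())
```

On a GPU, the batched tensor operations mean the cost of a step grows slowly with the number of spectra in the group, which is the source of the throughput gain the authors report.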
Grouped optimization enables high-throughput analysis, dramatically reducing processing time and allowing multiple stellar parameters to be estimated for many spectra at once. When applied to a dataset of 10 million LAMOST spectra, the framework cut the runtime from 84 hours on a standard CPU platform to just 7 hours on an NVIDIA A100 GPU. The study validated the framework's performance by comparing inferred parameter errors with the variations observed in repeat measurements, finding strong agreement. Furthermore, when applied to data from the DESI DR1 survey, the framework's estimates of effective temperature and surface gravity showed closer consistency with values derived from the APOGEE survey, particularly for cool giant stars.
Py-LASP Accelerates Stellar Parameter Inference
Scientists have developed a new Python-based framework, Py-LASP, to significantly enhance the efficiency and scalability of stellar parameter inference from large spectroscopic datasets. The team implemented LASP-CurveFit, a CPU-based fitting procedure, and LASP-Adam-GPU, a GPU-accelerated method that leverages grouped optimization to analyze spectra at high throughput. Applied to a dataset of 10 million LAMOST spectra, the framework reduced the runtime from 84 hours to just 7 hours on an NVIDIA A100 GPU. Importantly, the results obtained with Py-LASP remain consistent with those from the original pipeline, confirming the accuracy of the new approach. Detailed analysis shows that the inferred parameter errors closely track the variations observed in repeat measurements and are more conservative than the previous empirical errors. When applied to data from the DESI DR1 survey, the framework's effective temperatures and surface gravities agree better with values obtained from the APOGEE survey, particularly for cool giant stars.
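For the CPU side, the paper describes a curve-fitting procedure with multithreaded processing; the sketch below shows one plausible shape for that pattern using SciPy's `curve_fit` and a thread pool. The toy Gaussian model and the starting guess `p0` are illustrative stand-ins, not the pipeline's template grid or its seeding strategy.

```python
"""Hedged sketch of per-spectrum least-squares fits dispatched across
threads (assumed SciPy; not the LASP-CurveFit source)."""
from concurrent.futures import ThreadPoolExecutor

import numpy as np
from scipy.optimize import curve_fit

def toy_model(wave, centre, depth):
    # Toy stand-in for the pipeline's template-interpolation model.
    return 1.0 - depth * np.exp(-0.5 * ((wave - centre) / 2.0) ** 2)

def fit_one(args):
    wave, flux = args
    # p0 is a hypothetical starting guess; a real pipeline would seed
    # the fit, e.g. from a coarse grid search.
    popt, pcov = curve_fit(toy_model, wave, flux, p0=[6560.0, 0.5])
    return popt, np.sqrt(np.diag(pcov))  # best fit and 1-sigma errors

rng = np.random.default_rng(0)
wave = np.linspace(6550.0, 6570.0, 200)
spectra = [(wave, toy_model(wave, c, d) + 0.01 * rng.standard_normal(200))
           for c, d in [(6560.0, 0.4), (6562.0, 0.6), (6558.0, 0.3)]]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fit_one, spectra))

for popt, perr in results:
    print(popt, perr)
```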
LASP Framework Delivers Scalable Stellar Analysis
This research presents a new Python framework, built on the original LASP, for determining stellar parameters from large spectroscopic datasets, significantly improving both efficiency and scalability. The team developed two complementary methods, LASP-CurveFit and LASP-Adam-GPU, achieving substantial reductions in processing time: down to seven hours for ten million spectra on a high-performance GPU. Importantly, the results obtained with this framework closely match those from the original implementation, demonstrating consistent accuracy in parameter estimation. The framework also holds up in cross-survey comparisons, with effective temperatures and surface gravities aligning well with data from the APOGEE project, particularly for cooler giant stars.
Error estimates produced by the new system also agree closely with the variations observed in repeated measurements, and are more reliable than the previous empirical corrections. The authors acknowledge that the GPU-accelerated method is more sensitive to the initial temperature value for very hot stars and recommend choosing suitable starting points for such targets; they also note a potential bias in the radial velocity error estimates that warrants further study. The code and the resulting data catalog are publicly available, facilitating wider use and future research in stellar astrophysics.
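One way to read the repeat-measurement check, sketched below as an assumption rather than the paper's actual procedure: if each measurement of a star carries error sigma, the difference between two repeats has standard deviation sigma times the square root of two, so differences normalized by that quantity should scatter with unit variance when the reported errors are realistic.

```python
"""Illustrative repeat-observation consistency check (assumed
methodology, not the paper's code): for a star measured twice with
error sigma each time, diff / (sqrt(2) * sigma) should be roughly a
unit-variance Gaussian if the reported errors are well calibrated."""
import numpy as np

rng = np.random.default_rng(1)
n_pairs = 10_000
sigma = 50.0  # hypothetical reported Teff error in K

teff_true = rng.uniform(4000.0, 7000.0, n_pairs)
obs_a = teff_true + sigma * rng.standard_normal(n_pairs)
obs_b = teff_true + sigma * rng.standard_normal(n_pairs)

normed = (obs_a - obs_b) / (np.sqrt(2.0) * sigma)
print(f"std of normalized differences: {normed.std():.3f} (expect ~1)")
```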
👉 More information
🗞 Scalable Stellar Parameter Inference Using Python-based LASP: From CPU Optimization to GPU Acceleration
🧠 ArXiv: https://arxiv.org/abs/2512.24840
