PEFT-MuTS Improves Remaining Useful Life Prediction with Cross-Domain Time Series Data

Scientists are tackling the longstanding challenge of accurately predicting remaining useful life (RUL) in machinery, a task often hampered by limited degradation data. En Fu, Yanyan Hu, and Changhua Hu, together with Zengwang Jin and colleagues from Anhui University and the University of Science and Technology Beijing, present PEFT-MuTS, a parameter-efficient fine-tuning framework that leverages cross-domain time series data. The work challenges the conventional reliance on similar historical data for RUL prediction, demonstrating that pre-training on large, diverse datasets yields substantial improvements. By combining independent feature tuning with a meta-variable fusion mechanism, PEFT-MuTS exploits multivariate relationships and achieves accurate predictions with remarkably little target-equipment data (less than one percent in their experiments), outperforming existing supervised methods.

Cross-domain pre-training enables accurate remaining useful life prediction with minimal target data

Scientists have developed a novel framework, PEFT-MuTS, to overcome limitations in data-driven remaining useful life (RUL) prediction, a crucial element of Prognostics and Health Management. The research addresses the longstanding challenge of requiring extensive degradation data, a significant barrier to applying deep learning in real-world industrial settings.
This study demonstrates that accurate RUL prediction is achievable even with less than 1% of samples from the target equipment, a substantial improvement over conventional methods. The team achieved this breakthrough by constructing a Parameter-Efficient Fine-Tuning framework built on cross-domain pre-trained time-series representation models.
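The core idea of parameter-efficient fine-tuning is to freeze the large pre-trained backbone and train only a small number of task-specific parameters. Below is a minimal NumPy sketch of that idea; the encoder, its dimensions, and the head are all hypothetical stand-ins, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen encoder standing in for the cross-domain
# pre-trained time-series representation model (never updated).
W_frozen = rng.normal(size=(64, 16))

def encode(x):
    # Frozen feature extractor: forward passes only, no gradient updates.
    return np.tanh(x @ W_frozen)

# Only a small task head is trainable during fine-tuning.
w_head, b_head = rng.normal(size=16) * 0.01, 0.0

feats = encode(rng.normal(size=(4, 64)))   # (4 samples, 16 features)
preds = feats @ w_head + b_head            # RUL estimates for 4 samples

trainable = w_head.size + 1                # 17 trainable parameters
total = W_frozen.size + trainable          # 1041 parameters overall
print(f"trainable fraction: {trainable / total:.3f}")
```

Even in this toy setup, under 2% of the parameters are updated, which is what makes fine-tuning feasible with so few target-equipment samples.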

Contrary to the prevailing view that knowledge transfer requires similar devices, the researchers show that pre-training with large-scale, diverse time-series datasets yields substantial benefits. They developed an independent feature tuning network and a meta-variable-based low-rank multivariate fusion mechanism to enable the pre-trained model to effectively utilise multivariate relationships within degradation data for RUL prediction.

This work introduces a zero-initialized regressor, designed to stabilise the fine-tuning process under few-shot conditions, further enhancing the model’s performance with limited data. Experiments conducted on both aero-engine and industrial bearing datasets demonstrate that PEFT-MuTS significantly outperforms existing supervised and few-shot approaches.

The method markedly reduces the amount of data needed to achieve high predictive accuracy, opening new possibilities for RUL prediction in scenarios where data acquisition is costly or challenging. The research establishes a new paradigm for RUL prediction, moving beyond the reliance on homogeneous equipment data and embracing the power of cross-domain knowledge transfer.

This innovation has the potential to revolutionise maintenance strategies across various industries, enabling proactive interventions and minimising downtime. The team has made their code publicly available at https://github.com/fuen1590/PEFT-MuTS, facilitating further research and application of this promising technology.

Cross-domain pre-training enhances parameter-efficient remaining useful life prediction performance

Scientists developed PEFT-MuTS, a Parameter-Efficient Fine-Tuning framework for Remaining Useful Life (RUL) prediction, addressing limitations imposed by scarce degradation data. This study pioneered a method leveraging cross-domain pre-trained time-series representation models, challenging the conventional reliance on data from identical or similar equipment.

Researchers demonstrated that substantial improvements in RUL prediction are achievable through pre-training with large-scale, cross-domain time-series datasets, rather than limiting knowledge transfer to similar devices. The team engineered an independent feature tuning network and a meta-variable-based low-rank multivariate fusion mechanism to fully exploit multivariate relationships within degradation data.
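The appeal of a low-rank fusion mechanism is that cross-variable mixing can be parameterized with far fewer weights than a full mixing matrix. The sketch below illustrates this with NumPy; the dimensions, the residual form, and the factorization are illustrative assumptions, not the paper's exact meta-variable design:

```python
import numpy as np

rng = np.random.default_rng(1)
V, D, r = 14, 32, 2   # sensor variables (C-MAPSS uses 14), feature dim, rank

# Per-variable features, e.g. from a univariate representation model.
feats = rng.normal(size=(V, D))

# A full cross-variable mixing matrix needs V*V entries; the factored
# low-rank form A @ B needs only 2*V*r, with r << V.
A = rng.normal(size=(V, r)) * 0.1
B = rng.normal(size=(r, V)) * 0.1
fused = feats + (A @ B) @ feats   # residual low-rank multivariate fusion

full_params, lowrank_params = V * V, 2 * V * r
print(full_params, lowrank_params)   # 196 vs 56
```

As the authors note later, the rank is a trade-off: too large and the extra parameters overfit the scarce target data, too small and cross-variable information is lost.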

This innovative approach enables the pre-trained univariate time-series representation model to effectively process complex, multi-sensor data for downstream RUL prediction. Furthermore, scientists introduced a zero-initialized regressor, designed to stabilize the fine-tuning process specifically under few-shot learning conditions.

Experiments employed both aero-engine and industrial bearing datasets to rigorously evaluate the performance of PEFT-MuTS. The method achieved effective RUL prediction utilising less than 1% of samples from the target equipment, demonstrating a significant reduction in data requirements. Results showed PEFT-MuTS substantially outperformed conventional supervised and few-shot learning approaches, achieving markedly higher predictive accuracy.

The system delivers a novel solution for RUL prediction in data-scarce environments. Researchers harnessed a self-supervised pre-training process to learn generalizable temporal feature representations from extensive time-series data. This technique reveals the potential of time-series representation models to improve prediction accuracy and robustness, particularly when limited historical data is available. The approach decouples transfer learning from the need for equipment-specific degradation data, facilitating knowledge transfer across diverse domains and broadening the applicability of RUL prediction in real-world industrial settings.

Cross-domain pre-training enhances remaining useful life prediction with limited target data by leveraging knowledge from related tasks

Scientists have developed PEFT-MuTS, a Parameter-Efficient Fine-Tuning framework for remaining useful life (RUL) prediction, achieving substantial benefits through pre-training with large-scale cross-domain time series datasets. The research demonstrates that effective RUL prediction is possible even with access to less than 1% of samples from the target equipment, markedly reducing the data required for high predictive accuracy.

Experiments were conducted on both aero-engine and industrial bearing datasets to validate the method’s performance. The team measured RUL prediction accuracy using the C-MAPSS dataset, specifically the FD002 and FD004 subsets, which incorporate six operating conditions and 14 commonly used variables exhibiting degradation trends.

For the Bearing dataset, all three operating condition subsets, OP-A, OP-B, and OP-C, were utilized, each representing different speeds and loads with vibration signals recorded in horizontal and vertical directions. RUL labels were normalized to a range of [0, 1], where 1 indicates a healthy state and 0 signifies failure.
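The label convention described above is a simple linear mapping of remaining life onto [0, 1]. A minimal sketch, assuming a known maximum life per unit (the clipping is a common convention, not stated in the source):

```python
def normalize_rul(rul_cycles, max_life):
    # Map remaining cycles onto [0, 1]: 1 = fully healthy, 0 = failure.
    return min(max(rul_cycles / max_life, 0.0), 1.0)

# A unit with a 120-cycle life, observed at three points in time.
labels = [normalize_rul(c, max_life=120) for c in (120, 60, 0)]
print(labels)   # [1.0, 0.5, 0.0]
```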

Results demonstrate the effectiveness of the zero-initialized regressor in stabilizing the fine-tuning process under data-scarce conditions. Analysis of gradient variance revealed that it is positively correlated with input feature variance, label variance, and the regressor’s initialization variance; the zero-initialized regressor minimizes initial gradient variance by eliminating the contribution of initialization variance.
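The gradient-variance argument can be checked numerically: for a linear regressor with mean-squared-error loss, a zero-initialized weight vector removes the prediction term from the first gradient, so the initial gradient no longer depends on a random initialization draw. The data and dimensions below are synthetic placeholders:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 64, 32
X = rng.normal(size=(n, d))   # features from the frozen backbone
y = rng.uniform(size=n)       # normalized RUL labels in [0, 1]

def initial_grad(w):
    # Gradient of the MSE loss for a linear regressor at initialization.
    err = X @ w - y
    return 2 * X.T @ err / n

g_random = initial_grad(rng.normal(size=d))   # random initialization
g_zero = initial_grad(np.zeros(d))            # zero initialization

# With w = 0 the error reduces to -y, so the gradient depends only on
# the data, not on the initialization draw: its variance is far smaller.
print(g_random.var() > g_zero.var())
```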

This approach significantly reduces instability compared to training from scratch, as pre-trained models produce more discriminative features early in fine-tuning. To construct small-sample datasets, a sliding window approach was applied, generating time series segments with RUL assigned at the final step; for C-MAPSS, a window size of 30 with a step of 15 was used, while for the Bearing dataset, the window size was 1024 with a step of 32768.
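The sliding-window construction described above can be sketched as follows; the toy trajectory length and the normalized labels are illustrative, but the window size of 30 and step of 15 match the C-MAPSS settings quoted in the text:

```python
import numpy as np

def sliding_windows(series, rul, window, step):
    # Cut a run-to-failure series into segments; each segment's label
    # is the RUL at its final time step.
    segs, labels = [], []
    for start in range(0, len(series) - window + 1, step):
        end = start + window
        segs.append(series[start:end])
        labels.append(rul[end - 1])
    return np.array(segs), np.array(labels)

# Toy C-MAPSS-style trajectory: 100 cycles, 14 sensor channels.
T = 100
series = np.random.default_rng(3).normal(size=(T, 14))
rul = np.linspace(1.0, 0.0, T)   # normalized RUL, 1 = healthy
segs, labels = sliding_windows(series, rul, window=30, step=15)
print(segs.shape)   # (5, 30, 14)
```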

Three levels of sampling were then implemented: device retention (p1), RUL coverage (p2), and global data retention (p3), creating datasets with varying scales and simulating real-world conditions. Table 1 details the resulting small-sample training sets, showing data proportions and the total number of samples for each stage of degradation.
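Two of the three sampling levels can be sketched directly: retain a fraction p1 of the devices, then a fraction p3 of the surviving samples. The RUL-coverage level (p2) is omitted here because its exact rule is not specified in this summary; the function and its parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def subsample(device_ids, p1, p3):
    # Level 1: keep a fraction p1 of the devices (device retention).
    devices = np.unique(device_ids)
    kept_dev = rng.choice(devices, size=max(1, int(p1 * devices.size)),
                          replace=False)
    idx = np.flatnonzero(np.isin(device_ids, kept_dev))
    # Level 3: keep a fraction p3 of the surviving samples globally.
    kept = rng.choice(idx, size=max(1, int(p3 * idx.size)), replace=False)
    return np.sort(kept)

# 10 devices with 50 samples each.
device_ids = np.repeat(np.arange(10), 50)
idx = subsample(device_ids, p1=0.3, p3=0.1)
print(idx.size)   # 3 devices x 50 samples x 10% = 15
```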

Cross-domain transfer learning improves remaining useful life prediction accuracy by leveraging knowledge from diverse source domains

Scientists have developed a new parameter-efficient fine-tuning (PEFT) framework, named PEFT-MuTS, for predicting the remaining useful life (RUL) of equipment. This framework integrates cross-domain representation learning with PEFT to enable accurate RUL prediction even with limited data from the target equipment.

The core innovation lies in leveraging large-scale, cross-domain time-series datasets during a pre-training process, challenging the conventional reliance on similar historical data. PEFT-MuTS employs an independent feature tuning network and a meta-variable-based low-rank feature fusion mechanism to effectively utilise multivariate relationships within degradation data.

A zero-initialized regressor further enhances the stability of the fine-tuning process. Experiments conducted on aero-engine and industrial bearing datasets demonstrate that PEFT-MuTS significantly outperforms existing supervised and few-shot approaches, achieving effective RUL prediction with less than 1% of the target equipment’s data.

The authors acknowledge that balancing the size of low-rank parameters is crucial, as excessively large values can lead to overfitting, while smaller values may limit performance. This research establishes a new paradigm for few-shot RUL prediction by shifting the focus from identifying similar degradation patterns to leveraging generalisable temporal priors.

Future work could investigate quantifying the transferability of data from different source domains to improve the interpretability of temporal knowledge. Combining cross-domain samples with data from similar equipment also presents a potential avenue for enhancing prediction accuracy when available.

👉 More information
🗞 PEFT-MuTS: A Multivariate Parameter-Efficient Fine-Tuning Framework for Remaining Useful Life Prediction based on Cross-domain Time Series Representation Model
🧠 arXiv: https://arxiv.org/abs/2601.22631

Rohail T.


I am a quantum scientist exploring the frontiers of physics and technology. My work focuses on uncovering how quantum mechanics, computing, and emerging technologies are transforming our understanding of reality. I share research-driven insights that make complex ideas in quantum science clear, engaging, and relevant to the modern world.
