Explainable AI Advances Machine Learning Reliability for Industrial Cyber-Physical Systems

Industrial Cyber-Physical Systems (CPS) represent vital infrastructure, demanding unwavering reliability from both economic and security standpoints. Annemarie Jutte from Saxion University of Applied Sciences and the University of Twente, together with Uraz Odyurt from the University of Twente and colleagues, address a growing concern: the ‘black box’ nature of machine learning models increasingly deployed within these critical systems. Their research tackles the challenge of ensuring predictable behaviour in deep learning applications for CPS, recognising that opaque models can fail unexpectedly when faced with new data. By applying Explainable AI (XAI) techniques, specifically SHAP values computed over time-series decompositions, the team uncovers crucial insights into model reasoning and identifies a lack of sufficient contextual information during training. This approach, informed by the XAI findings, demonstrably improves predictive performance by optimising the data used for model development, offering a significant step towards more trustworthy and robust industrial AI systems.

This work establishes a method for uncovering model reasoning, enabling a more thorough evaluation of behaviour and preventing unexpected outcomes on unseen data. By providing the model with a broader view of the time-series data, informed by the XAI findings, the researchers were able to demonstrably improve overall model performance. The study reveals that incorporating XAI into the model development lifecycle allows for empirical adjustments based on observed behaviour, moving beyond traditional design-space search methodologies.

The team hypothesised that a wider data instance could better capture certain component states, while a narrower instance might improve focus on abrupt variations, and their experiments confirmed this potential. Furthermore, the study provides publicly available data and code, facilitating reproducibility and encouraging further research in this critical area. The use-case serves as a practical demonstration of achievable improvements, and the approach is readily extensible to enhance other targeted machine learning models. The work opens avenues for more robust and reliable ML integration into industrial CPS, ensuring safer and more efficient operation of these sensitive infrastructures. By prioritising reliability over computational cost, the researchers underscore the importance of XAI in applications where safety and economic security are paramount.
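The paper does not print code at this point, but the window-size idea is easy to make concrete. Below is a minimal Python sketch, not the authors' implementation: the function name make_windows and the synthetic signal are illustrative stand-ins showing how the same time series can be formatted into wider or narrower data instances.

```python
import numpy as np

def make_windows(signal: np.ndarray, window_size: int, stride: int = 1) -> np.ndarray:
    """Slice a 1-D time series into overlapping fixed-width instances.

    A larger window_size gives the model more temporal context (helping it
    recognise slowly evolving component states), while a smaller one keeps
    each instance focused on local, abrupt variations.
    """
    n_windows = (len(signal) - window_size) // stride + 1
    return np.stack([signal[i * stride : i * stride + window_size]
                     for i in range(n_windows)])

# Example: the same synthetic signal formatted at two different widths.
t = np.linspace(0, 10, 1000)
signal = np.sin(2 * np.pi * t) + 0.1 * np.random.randn(t.size)
wide = make_windows(signal, window_size=256)    # more context per instance
narrow = make_windows(signal, window_size=32)   # tighter focus per instance
print(wide.shape, narrow.shape)
```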

SHAP-Driven FDI Model Tuning with Time-Series Decomposition

Experiments centred on a Convolutional Neural Network (CNN) implemented as a Fault Detection and Identification (FDI) solution, for which the team generated SHAP values and incorporated them into model fine-tuning. The researchers decomposed time-series pseudo-signals and subsequently adjusted data formatting based on the model’s response to these signal components, a novel approach to informed model development. Data instances were formatted with varying window sizes, and the study explored the effects of these adjustments on model performance, guided by the XAI findings regarding contextual-information sufficiency. Specifically, the team assessed how wider windows could better capture component states and how narrower windows might sharpen focus on abrupt variations within the time-series data.
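As a rough illustration of how these pieces fit together, the sketch below decomposes a synthetic pseudo-signal, trains a toy 1-D CNN on windowed instances, and computes SHAP attributions over them. None of this code is from the paper: the signal, the toy labels, the period=50, and the tiny architecture are all assumptions; the authors' actual data and pipeline live in their public repository.

```python
import numpy as np
import shap
import tensorflow as tf
from statsmodels.tsa.seasonal import seasonal_decompose

# Illustrative stand-ins for the paper's FDI pipeline: a synthetic
# pseudo-signal, toy labels, and a deliberately tiny CNN.
rng = np.random.default_rng(0)
n = 2000
t = np.arange(n)
raw_signal = (np.sin(2 * np.pi * t / 50)        # periodic component
              + 0.5 * t / n                     # slow trend
              + 0.1 * rng.standard_normal(n))   # noise

# Decompose the pseudo-signal so attributions can later be read against
# trend / seasonal / residual components rather than raw samples alone.
parts = seasonal_decompose(raw_signal, model="additive", period=50)

# Format data instances with a chosen window size -- the knob that the
# XAI findings suggested tuning.
window = 64
X = np.stack([raw_signal[i:i + window] for i in range(n - window)])[..., np.newaxis]
y = (X[:, -1, 0] > X[:, 0, 0]).astype(int)      # toy binary labels

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(16, 5, activation="relu", input_shape=(window, 1)),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=2, batch_size=64, verbose=0)

# Attribute predictions to input positions; comparing where attribution
# mass falls against parts.trend / parts.seasonal / parts.resid is what
# can reveal that instances lack temporal context (e.g. attributions
# piling up at the window edges), arguing for a wider window.
explainer = shap.DeepExplainer(model, X[:100])
shap_values = explainer.shap_values(X[:16])
```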

This approach enables a deeper understanding of the relationships the model relies upon, moving beyond simple prediction accuracy metrics to evaluate generalisation capabilities and detect spurious correlations. The study meticulously documented the entire process, providing publicly available data and code for reproduction, fostering transparency and enabling further research in this domain. By connecting XAI insights directly to model improvement, the work demonstrates a significant advancement in the development of robust and reliable ML solutions for industrial CPS applications.

XAI Improves CPS Model Reliability via Context

Through detailed analysis using SHAP values, the study observed evidence of insufficient contextual information being provided to the ML models during their initial training phase. The team then measured the impact of varying data-instance window sizes, informed by these XAI findings, on model performance, demonstrating a clear link between contextual data and accuracy. Widening the windows directly addressed the identified deficiency and led to measurable improvements in predictive capabilities. The results demonstrate that this XAI-informed approach allows for empirical adjustments to training hyperparameters, moving beyond traditional design-space search methodologies. The work also delivers a publicly available dataset and code for reproduction, enabling further research and validation of the findings, and provides a demonstration of achievable improvements in ML model reliability for industrial CPS, offering a pathway towards safer operation of these sensitive infrastructures.
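To make that evaluation style concrete, here is a hypothetical sweep over candidate window sizes on a synthetic fault-injection task. The injected spikes, sizes, and any accuracies it prints are invented for illustration; only the experimental pattern, reformat the instances, retrain, compare held-out accuracy, mirrors the study's approach.

```python
import numpy as np
import tensorflow as tf

def build_cnn(window: int) -> tf.keras.Model:
    """A small 1-D CNN classifier; a stand-in for the paper's FDI model."""
    model = tf.keras.Sequential([
        tf.keras.layers.Conv1D(16, 5, activation="relu", input_shape=(window, 1)),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

rng = np.random.default_rng(1)
n = 4000
signal = np.sin(2 * np.pi * np.arange(n) / 50) + 0.1 * rng.standard_normal(n)
faults = rng.choice(n, size=40, replace=False)
signal[faults] += 3.0                      # inject abrupt, fault-like spikes

for window in (32, 64, 128, 256):
    # Reformat the same series at each candidate width, then retrain.
    X = np.stack([signal[i:i + window] for i in range(n - window)])[..., None]
    y = np.array([((faults >= i) & (faults < i + window)).any()
                  for i in range(n - window)], dtype=int)
    split = int(0.8 * len(X))
    model = build_cnn(window)
    model.fit(X[:split], y[:split], epochs=2, batch_size=64, verbose=0)
    _, acc = model.evaluate(X[split:], y[split:], verbose=0)
    print(f"window={window:4d}  held-out accuracy={acc:.3f}")
```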

👉 More information
🗞 Explainable AI to Improve Machine Learning Reliability for Industrial Cyber-Physical Systems
🧠 ArXiv: https://arxiv.org/abs/2601.16074

Rohail T.

I am a quantum scientist exploring the frontiers of physics and technology. My work focuses on uncovering how quantum mechanics, computing, and emerging technologies are transforming our understanding of reality. I share research-driven insights that make complex ideas in quantum science clear, engaging, and relevant to the modern world.

Latest Posts by Rohail T.:

Deepseek-R1-Distill-Llama-70b Achieves 12-Dataset Benchmark for Causal Discovery (January 27, 2026)

Large Language Models Achieve Fine-Grained Opinion Analysis with Reduced Human Effort (January 27, 2026)

Qsmri Achieves Noninvasive Detection of Neuronal Activity in Human Brains (January 27, 2026)