An extended neurosymbolic AI framework, ODXU, improves network intrusion detection system performance across multiple metrics by combining deep embedded clustering, symbolic reasoning, and uncertainty quantification. A transfer learning strategy that reuses a pre-trained autoencoder and retrains downstream modules achieves high accuracy with reduced data requirements on a separate dataset. Metamodel-based uncertainty quantification consistently outperforms score-based methods.
The escalating sophistication of cyber threats demands continual refinement of network intrusion detection systems (NIDS). Current approaches often struggle with both accuracy and the ability to explain their reasoning, hindering effective response. Researchers are now exploring the integration of neurosymbolic artificial intelligence (NSAI) – combining the pattern recognition of neural networks with the logical reasoning of symbolic AI – to address these limitations. A team led by Huynh T. T. Tran, Jacob Sander, Achraf Cohen, and Brian Jalaian at the University of West Florida, alongside Nathaniel D. Bastian of the United States Military Academy, details their work in ‘Neurosymbolic Artificial Intelligence for Robust Network Intrusion Detection: From Scratch to Transfer Learning’. They present an extension to their ODXU framework, incorporating uncertainty quantification and a novel transfer learning strategy to improve both the performance and adaptability of NIDS.
Enhanced Network Intrusion Detection via Neurosymbolic AI and Uncertainty Quantification
Network Intrusion Detection Systems (NIDS) represent a vital component of modern digital security. Recent research details an advancement to ODXU, a Neurosymbolic AI (NSAI) framework, designed to improve the robustness, interpretability, and generalisation capabilities of NIDS. NSAI integrates neural networks with symbolic reasoning – essentially combining pattern recognition with logical deduction – to create more adaptable and transparent systems.
ODXU employs deep embedded clustering for feature extraction – identifying key characteristics within network traffic – coupled with symbolic reasoning via the XGBoost algorithm, a gradient boosting framework commonly used for both classification and regression tasks. Crucially, the framework incorporates comprehensive uncertainty quantification (UQ), a method for assessing the reliability of predictions.
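The paper describes this pipeline rather than releasing code, but its shape can be pictured with a short Python sketch: a small autoencoder compresses traffic features, a K-means step stands in for the deep embedded clustering module, and an XGBoost classifier makes the final decision. Everything below – the synthetic data, layer sizes, cluster count, and hyperparameters – is an illustrative assumption, not the authors’ configuration.

```python
# Minimal sketch of an ODXU-style pipeline (illustrative, not the authors' code):
# an autoencoder learns compact flow features, clustering structures the latent
# space, and XGBoost performs the final classification.
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 40)).astype("float32")  # stand-in for flow features
y = rng.integers(0, 2, size=2000)                  # stand-in labels (benign / attack)

class AutoEncoder(nn.Module):
    def __init__(self, n_in=40, n_latent=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_in, 32), nn.ReLU(), nn.Linear(32, n_latent))
        self.dec = nn.Sequential(nn.Linear(n_latent, 32), nn.ReLU(), nn.Linear(32, n_in))
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

ae = AutoEncoder()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
xb = torch.from_numpy(X)
for _ in range(50):                                # brief reconstruction training
    opt.zero_grad()
    recon, _ = ae(xb)
    loss = nn.functional.mse_loss(recon, xb)
    loss.backward()
    opt.step()

with torch.no_grad():
    Z = ae.enc(xb).numpy()                         # latent representation

clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(Z)
features = np.column_stack([Z, clusters])          # latent features + cluster id

clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
clf.fit(features, y)                               # downstream decision stage
```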
Performance evaluations utilising the CIC-IDS-2017 dataset demonstrate ODXU’s superiority over traditional neural network models across six key metrics, including classification accuracy and false omission rate – the proportion of traffic classified as benign that is in fact malicious. Researchers compared two broad categories of UQ methods. Score-based approaches, such as Confidence Scoring and Shannon Entropy, assess uncertainty directly from the model’s predicted probabilities. Metamodel-based techniques instead train a secondary model, drawing on features such as SHAP (SHapley Additive exPlanations) values and Information Gain, to judge how trustworthy each prediction is. Results indicate that metamodel-based UQ consistently outperforms score-based methods across both datasets tested.
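To make the score-based side concrete, the sketch below computes Confidence Scoring (the top predicted class probability) and Shannon Entropy (how spread out the predicted distribution is) from a classifier’s output; the probability values are invented purely for illustration.

```python
# Score-based uncertainty quantification from predicted class probabilities.
import numpy as np

def confidence_score(proba):
    """Top class probability per sample; higher means more confident."""
    return proba.max(axis=1)

def shannon_entropy(proba, eps=1e-12):
    """Entropy (in nats) of the predicted distribution; higher means more uncertain."""
    p = np.clip(proba, eps, 1.0)
    return -(p * np.log(p)).sum(axis=1)

# Hypothetical probabilities for three flows, two classes (benign, attack).
proba = np.array([[0.98, 0.02],   # confident prediction
                  [0.55, 0.45],   # borderline prediction
                  [0.70, 0.30]])
print(confidence_score(proba))    # [0.98 0.55 0.7 ]
print(shannon_entropy(proba))     # approx. [0.098 0.688 0.611]
```

A metamodel-based alternative would instead fit a second, lightweight model – fed with features such as SHAP values or Information Gain derived from the base classifier – to predict whether each individual prediction is likely to be correct, which is the style of approach the study found more reliable.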
The study also addresses the potential of transfer learning – leveraging knowledge gained from one task to improve performance on another. While prevalent in fields like computer vision, transfer learning remains relatively underexplored within cybersecurity. Researchers developed a strategy to reuse a pre-trained ODXU model on a new dataset, the ACI-IoT-2023 dataset, to assess its adaptability. An ablation study – systematically removing components to assess their contribution – revealed that the optimal transfer configuration involves reusing the pre-trained autoencoder (a type of neural network used for unsupervised learning), retraining the clustering module, and fine-tuning the XGBoost classifier. This configuration outperformed traditional neural network models while utilising only 16,000 training samples – approximately 50% of the total data available.
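As a rough illustration of what that winning configuration could look like in code (assuming the pipeline sketched earlier), the function below keeps the pre-trained encoder frozen, refits the clustering step on the target data, and warm-starts XGBoost from the source-domain booster. The helper name, the placeholder inputs, and the use of the `xgb_model` warm-start argument are assumptions for illustration, not the authors’ exact procedure.

```python
# Sketch of the best-performing transfer configuration from the ablation study:
# reuse the autoencoder, retrain clustering, fine-tune the XGBoost classifier.
import numpy as np
import torch
from sklearn.cluster import KMeans
from xgboost import XGBClassifier

def transfer(ae, source_clf, X_target, y_target, n_clusters=5):
    # 1) Reuse the pre-trained encoder as-is (no gradient updates).
    with torch.no_grad():
        Z = ae.enc(torch.from_numpy(X_target.astype("float32"))).numpy()

    # 2) Retrain the clustering module on the target-domain latent space.
    clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(Z)
    features = np.column_stack([Z, clusters])

    # 3) Fine-tune XGBoost, warm-starting from the source-domain booster
    #    (assumes the feature layout matches what the source model was trained on).
    clf = XGBClassifier(n_estimators=100, max_depth=4, eval_metric="logloss")
    clf.fit(features, y_target, xgb_model=source_clf.get_booster())
    return clf
```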
This demonstrates the potential for efficient model deployment in dynamic network environments where labelled data may be limited. The research highlights that metamodel-based UQ methods – those utilising SHAP values and Information Gain – provide a more reliable assessment of trustworthiness than simpler score-based approaches. Understanding why a model makes a particular prediction is as important as the prediction itself.
By integrating deep learning, symbolic reasoning, and robust uncertainty quantification, ODXU offers a promising pathway towards more resilient and interpretable network intrusion detection systems, capable of adapting to the evolving cybersecurity landscape. Future work will focus on applying ODXU to other cybersecurity domains and further improving its performance and scalability.
👉 More information
🗞 Neurosymbolic Artificial Intelligence for Robust Network Intrusion Detection: From Scratch to Transfer Learning
🧠 DOI: https://doi.org/10.48550/arXiv.2506.04454
