Explainable AI Achieves 83.5% Accuracy with Quantized Active Ingredients and Boltzmann Machines

The challenge of explaining how artificial intelligence systems reach decisions remains a significant hurdle, particularly within critical sectors like healthcare and finance where transparency is essential. A. M. A. S. D. Alagiyawanna, Asoka Karunananda, Thushari Silva, and A. Mahasinghe from the University of Moratuwa, Sri Lanka, present a novel framework for explainable AI, comparing Quantized Boltzmann Machines (QBMs) with Classical Boltzmann Machines (CBMs). Their research leverages quantum computing principles to enhance transparency in decision-making, utilising a binarised MNIST dataset and techniques like gradient-based saliency maps and SHAP values to evaluate feature importance. Results demonstrate that QBMs not only achieve superior classification accuracy, reaching 83.5% compared to the CBMs’ 54%, but also provide more focused feature attributions, suggesting a clearer identification of the key factors driving predictions. This work highlights the potential of hybrid quantum-classical models to deliver both improved performance and greater trustworthiness in AI systems.

Understanding why an AI system reaches a particular decision poses a significant challenge, particularly within high-stakes fields like healthcare and finance, where the reasoning behind a decision is as important as the decision itself. The research integrates principles of quantum computing with classical machine learning to bring transparency to AI decision-making.

The study involved training both QBMs and CBMs on a binarised and dimensionally reduced version of the MNIST dataset, preprocessed using Principal Component Analysis (PCA). To assess interpretability, the team employed gradient-based saliency maps to evaluate feature attributions within the QBMs, while utilising SHAP (SHapley Additive exPlanations) for the CBMs. QBMs were constructed using hybrid quantum-classical circuits incorporating strongly entangling layers, designed to create richer latent representations, whereas CBMs functioned as a classical baseline utilising contrastive divergence. This innovative approach allowed for a direct comparison of the explainability and performance characteristics of both models.
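Gradient-based saliency, one of the two attribution techniques mentioned above, scores each input feature by the magnitude of the model output's gradient with respect to that feature. The following minimal sketch illustrates the idea on a toy logistic model over four PCA features; the weights and model here are invented for illustration and are not the paper's trained QBM or CBM.

```python
import numpy as np

# Toy stand-in for a trained classifier: a logistic score over 4 PCA features.
# These weights are illustrative only, not taken from the paper.
w = np.array([1.5, -0.4, 0.9, 0.1])
b = 0.2

def model(x):
    """Sigmoid score for a single 4-dimensional input."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def saliency(x):
    """Gradient-based saliency: |d score / d x_i| for each feature i.

    For a sigmoid score s, ds/dx = s * (1 - s) * w, so the saliency is
    the absolute value of that gradient, feature by feature.
    """
    s = model(x)
    return np.abs(s * (1.0 - s) * w)

x = np.array([0.3, -1.2, 0.5, 0.0])
print(saliency(x).argmax())  # feature 0 dominates: it has the largest |weight|
```

For this linear-in-features model the gradient is constant up to the scalar factor s(1 − s), so the saliency ranking simply follows the weight magnitudes; for the QBM's quantum layer the gradient varies with the input, which is what makes the attribution maps informative.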

Experiments revealed that QBMs significantly outperformed CBMs in classification accuracy, achieving a score of 83.5% compared to 54% for the classical models. Furthermore, the study demonstrated that QBMs exhibited more concentrated distributions in feature attributions, quantified by an entropy value of 1.27 versus 1.39 for CBMs. This indicates that QBMs not only provide superior predictive performance but also offer a clearer identification of the most influential features driving model predictions, effectively pinpointing the key factors behind their decisions.

The work establishes that quantum-classical hybrid models can simultaneously improve both accuracy and interpretability, paving the way for more trustworthy and explainable AI systems. By harnessing quantum parallelism, superposition, and entanglement, QBMs can represent richer probability distributions with potentially fewer parameters than their classical counterparts. This approach enabled a comparative assessment of interpretability between quantum and classical models, crucial for high-stakes applications demanding transparency.

To prepare the data, the team focused on binary classification using the MNIST dataset, initially restricting analysis to the digits zero and one due to their distinct visual characteristics.

Each 28×28 pixel image underwent normalisation, and Principal Component Analysis (PCA) was applied, reducing the 784-dimensional input to just four principal components. This dimensionality reduction made the quantum computation tractable and streamlined the subsequent analysis. The reduced representation is computed as Z = Uk X, where X is the original 784-dimensional input, Uk is the matrix whose rows are the first k principal components, and Z is the resulting k-dimensional projection. A key innovation lay in the quantum hidden layer design, implemented using the PennyLane software framework. Classical input features were encoded onto qubits via Angle Embedding, utilising rotation around the Y-axis (the RY gate) to map the input vector into a valid quantum state.
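The PCA projection described above can be sketched in a few lines of NumPy. The random matrix below merely stands in for the flattened MNIST digits; the SVD of the centred data yields the principal components, and Z = Uk X is applied sample-wise.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for flattened 28x28 MNIST digits: 200 samples x 784 pixels.
X = rng.random((200, 784))

# Centre the data, then take the top-k right singular vectors as components.
k = 4
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Uk = Vt[:k]        # k x 784: rows are the first k principal components
Z = Xc @ Uk.T      # 200 x 4: the projection Z = Uk X, applied to each sample

print(Z.shape)  # (200, 4)
```

The four resulting components per image are what get encoded onto the four qubits via RY rotations in the quantum layer.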

This quantum encoding allowed the circuit to explore complex feature interactions through superposition and entanglement, potentially capturing subtle correlations missed by classical models. The quantum state evolution was mathematically defined as |ψ(x, θ)⟩ = V(θ) · Uembed(x) · |0⟩^⊗4, where V(θ) represents the parameterised quantum circuit and Uembed(x) is the embedding unitary. Feature extraction from the quantum layer involved measuring the expectation value of the Pauli-Z operator on each qubit, yielding a vector z that represents the quantum-encoded input data.
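The pipeline above can be simulated directly with a small statevector computation. The sketch below is a simplified pure-NumPy stand-in, not the authors' PennyLane circuit: it uses RY angle embedding and a single variational RY layer followed by a ring of CNOTs, which only approximates PennyLane's StronglyEntanglingLayers template, and then reads out the Pauli-Z expectation on each of the four qubits.

```python
import numpy as np

I2 = np.eye(2)
PAULI_Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def ry(theta):
    """Single-qubit rotation about the Y axis (real-valued matrix)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def kron_all(ops):
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

def single(gate, q, n):
    """Lift a one-qubit gate on qubit q to the full n-qubit space."""
    ops = [I2] * n
    ops[q] = gate
    return kron_all(ops)

def cnot(c, t, n):
    """CNOT with control c, target t: P0_c (x) I + P1_c (x) X_t."""
    P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    ops0 = [I2] * n; ops0[c] = P0
    ops1 = [I2] * n; ops1[c] = P1; ops1[t] = X
    return kron_all(ops0) + kron_all(ops1)

def quantum_features(x, thetas, n=4):
    """|psi> = V(theta) U_embed(x) |0>^(x)n, then z_i = <psi| Z_i |psi>."""
    state = np.zeros(2 ** n); state[0] = 1.0      # |0...0>
    for q, xi in enumerate(x):                    # angle embedding U_embed(x)
        state = single(ry(xi), q, n) @ state
    for q, th in enumerate(thetas):               # variational rotations V(theta)
        state = single(ry(th), q, n) @ state
    for q in range(n):                            # ring of entangling CNOTs
        state = cnot(q, (q + 1) % n, n) @ state
    return [float(state @ (single(PAULI_Z, q, n) @ state)) for q in range(n)]

z = quantum_features([0.0] * 4, [0.0] * 4)
print(z)  # all angles zero leaves |0000> unchanged, so every <Z_i> is 1.0
```

The returned vector z plays the role of the quantum-encoded features described in the text; in the paper these feed the classical part of the hybrid model.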

The team quantified feature attribution distributions using entropy, finding a value of 1.27 for QBMs versus 1.39 for CBMs, indicating more concentrated and therefore more easily interpretable distributions. This concentration suggests that QBMs effectively identify the most influential features, analogous to pinpointing the “active ingredient” in a complex system. The study leveraged gradient-based saliency maps to evaluate feature attributions in QBMs, while employing SHAP (SHapley Additive exPlanations) for CBMs. This approach allowed scientists to assess how each input feature contributes to the model’s output.
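The entropy comparison above can be reproduced on illustrative data. The attribution vectors below are invented examples, not the paper's actual attributions; they simply show that a distribution concentrated on one feature has lower Shannon entropy than a diffuse one.

```python
import numpy as np

def attribution_entropy(attr):
    """Shannon entropy (in nats) of the normalised absolute attributions."""
    p = np.abs(np.asarray(attr, dtype=float))
    p = p / p.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return float(-(p * np.log(p)).sum())

# Illustrative attribution vectors over four features (not the paper's values):
concentrated = np.array([0.80, 0.10, 0.05, 0.05])  # one dominant "active ingredient"
diffuse = np.array([0.25, 0.25, 0.25, 0.25])       # importance spread evenly

print(attribution_entropy(concentrated) < attribution_entropy(diffuse))  # True
```

For four features the uniform distribution attains the maximum entropy of ln 4 ≈ 1.386 nats, which, assuming the reported entropies are computed in nats over the four PCA features, puts the CBM value of 1.39 near the fully diffuse limit while the QBM value of 1.27 indicates noticeably more concentrated attributions.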

Results demonstrate that the hybrid quantum-classical architecture of QBMs, incorporating strongly entangling layers, enables richer latent representations and facilitates more transparent decision-making processes. Further analysis focused on the ability of each model to highlight key features influencing predictions. The work illustrates that QBMs identify these “active ingredients” more effectively than CBMs, offering a pathway towards more trustworthy and explainable AI systems. Crucially, this improvement in accuracy is coupled with greater interpretability rather than traded against it.

👉 More information
🗞 A Novel Approach to Explainable AI with Quantized Active Ingredients in Decision Making
🧠 ArXiv: https://arxiv.org/abs/2601.08733

Rohail T.

I am a quantum scientist exploring the frontiers of physics and technology. My work focuses on uncovering how quantum mechanics, computing, and emerging technologies are transforming our understanding of reality. I share research-driven insights that make complex ideas in quantum science clear, engaging, and relevant to the modern world.
