Researchers enhance spiking neural networks with Spike Agreement Dependent Plasticity, exceeding STDP accuracy

Spiking Neural Networks offer a potentially more realistic and energy-efficient approach to artificial intelligence, but developing effective learning rules for these networks remains a significant challenge. Saptarshi Beja and colleagues at the University of Manchester, along with collaborators, now present Spike Agreement Dependent Plasticity (SADP), a new learning rule that focuses on the overall agreement between the firing patterns of connected neurons, rather than relying on precise timing of individual spikes. This approach, which utilises a measure of correlation, not only improves the accuracy of pattern recognition on image datasets like MNIST and Fashion-MNIST, but also offers a substantial speed advantage over traditional learning rules. By bridging the gap between biological realism and computational efficiency, SADP represents a promising step towards scalable and practical spiking neural networks for advanced machine learning applications.

Instead of relying on precise timing between individual spikes, SADP focuses on the agreement between entire spike trains, offering a more robust and scalable approach to learning. This innovative method employs statistical measures to determine synaptic weight updates, effectively capturing the collective behavior of neuronal groups and addressing challenges in noisy environments. The SADP update rule achieves linear-time complexity, significantly improving computational efficiency and enabling practical hardware implementation.
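To make the idea concrete, here is a minimal sketch of an agreement-driven update. The Pearson-style correlation measure and the learning-rate scaling below are illustrative assumptions rather than the authors' exact formulation; the point is that agreement between two length-T trains is computed in a single O(T) pass, with no pairwise spike matching.

```python
import numpy as np

def spike_agreement(pre, post):
    """Pearson-style correlation between two binary spike trains.

    Both inputs are 0/1 sequences of equal length T. The computation is a
    single pass over the trains, so it costs O(T), unlike pairwise STDP,
    which compares every pre-synaptic spike with every post-synaptic spike.
    """
    pre = np.asarray(pre, dtype=float)
    post = np.asarray(post, dtype=float)
    pre_c = pre - pre.mean()
    post_c = post - post.mean()
    denom = np.sqrt((pre_c ** 2).sum() * (post_c ** 2).sum())
    if denom == 0.0:  # silent or saturated train: no defined agreement, treat as 0
        return 0.0
    return float((pre_c * post_c).sum() / denom)

def sadp_update(w, pre, post, lr=0.01):
    """Hypothetical agreement-driven weight update: correlated trains
    potentiate the synapse, anti-correlated trains depress it."""
    return w + lr * spike_agreement(pre, post)

# Identical trains agree perfectly (+1), so the weight increases.
w_new = sadp_update(0.5, [1, 0, 1, 0, 1, 0], [1, 0, 1, 0, 1, 0])
```

Anti-phase trains yield an agreement of -1 and depress the weight by the same margin, which is what makes the rule correlation-sensitive rather than causality-sensitive.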

Researchers recognized that traditional STDP models, focused on pairwise spike interactions, often fail to capture the complexity of biological learning. The team therefore pioneered a shift towards a more global model of plasticity, treating neuronal synchrony, the coordinated firing of neuron groups, as a key driver of synaptic change. Rather than enforcing strict causality between individual pre- and post-synaptic spikes, SADP quantifies the alignment between entire spike trains using population-level correlation metrics, a correlation-sensitive form of learning that remains effective under fluctuations in spike timing and environmental noise. The update rule maps naturally onto bitwise logic for hardware implementation, and the team harnessed experimental data from iontronic organic memtransistor devices to derive spline-based kernels that further enhance SADP's performance.
The team demonstrates that SADP achieves linear-time complexity, a crucial improvement over the quadratic complexity of STDP, and is particularly well-suited for efficient hardware implementation.

Experiments on the image classification benchmarks MNIST and Fashion-MNIST reveal that SADP consistently outperforms classical STDP in both accuracy and runtime across various network configurations and encoding schemes; because it captures broader patterns of neural activity, it is also less susceptible to noise. Furthermore, the researchers incorporated measurements from experimental iontronic organic memtransistor devices, creating spline-based kernels that enhance SADP's performance and its compatibility with emerging neuromorphic hardware. The team demonstrates that SADP achieves competitive accuracy and faster training times compared to both Hebbian and STDP learning rules, particularly when spline or linear kernels are paired with rate coding, offering a pathway to integrate algorithmic advances with physical devices. The findings establish SADP as a promising and efficient alternative to existing SNN training methods, balancing theoretical performance with practical implementation. While the current work focuses on shallow networks, the authors acknowledge that extending SADP to deeper architectures remains a challenge, requiring mechanisms to maintain signal integrity across layers. Future research will explore adaptive kernel learning, reward-modulated variants of SADP, and attention-based agreement mechanisms to enable deeper and more flexible learning architectures.
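The kernel idea can be illustrated with a toy example. The sample points below are invented for illustration, not the paper's memtransistor measurements; they show how a measured agreement-to-weight-change response could be interpolated and then used as the update kernel.

```python
import numpy as np

# Hypothetical device-response samples mapping agreement score -> weight change.
# The paper fits its kernels to iontronic memtransistor data; these numbers
# are made up purely for illustration.
agreement_pts = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
dw_pts = np.array([-0.08, -0.03, 0.0, 0.04, 0.10])

def kernel_dw(agreement):
    """Piecewise-linear kernel: interpolate the sampled device response
    to obtain a weight update for any agreement value in [-1, 1]."""
    return float(np.interp(agreement, agreement_pts, dw_pts))

dw = kernel_dw(0.25)  # halfway between 0.0 and 0.04 -> 0.02
```

A spline kernel would replace the linear interpolation with a smooth fit to the same sample points; the learning rule itself is unchanged, which is what makes the device data pluggable.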

👉 More information
🗞 Spike Agreement Dependent Plasticity: A scalable Bio-Inspired learning paradigm for Spiking Neural Networks
🧠 ArXiv: https://arxiv.org/abs/2508.16216

Quantum News

As the Official Quantum Dog (or hound), my role is to dig out the latest nuggets of quantum goodness. There is so much happening right now in technology, whether AI or the march of the robots, but quantum occupies a special space. Quite literally a special space: a Hilbert space, in fact, haha! Here I try to provide some of the news that might be considered breaking in the quantum computing space.

Latest Posts by Quantum News:

Toyota & ORCA Achieve 80% Compute Time Reduction Using Quantum Reservoir Computing (January 14, 2026)

GlobalFoundries Acquires Synopsys’ Processor IP to Accelerate Physical AI (January 14, 2026)

Fujitsu & Toyota Systems Accelerate Automotive Design 20x with Quantum-Inspired AI (January 14, 2026)