Researchers are tackling the significant challenge of training Spiking Neural Networks (SNNs) with high temporal precision, a key requirement for low-latency and energy-efficient artificial intelligence. Roel Koopman, Sebastian Otte, and Sander Bohté, from the Machine Learning Group at CWI and the Institute of Robotics and Cognitive Systems at the University of Lübeck, present a novel approach called SpikingGamma, which enables surrogate-gradient free and temporally precise online training. This work is particularly important because it circumvents the scaling issues and instability plaguing existing methods, allowing for the development of SNNs that can learn complex temporal patterns and are readily adaptable for implementation on dedicated hardware.
This breakthrough addresses a critical limitation in current SNN technology: the difficulty of training networks with fine temporal resolution, a key factor for both rapid responsiveness and efficient hardware implementation.
The work introduces spiking neurons equipped with internal recursive memory structures, coupled with sigma-delta spike-coding, enabling direct error backpropagation without reliance on surrogate gradients. This innovative approach allows SNNs to learn intricate temporal patterns with minimal spiking activity, scaling to complex tasks and benchmarks with competitive accuracy, all while remaining unaffected by the model’s temporal resolution.
The SpikingGamma model builds upon earlier work in temporal coding and recursive neural networks, integrating adaptive memory with sigma-delta spike-coding to remove self-recurrency. This unique combination facilitates feedforward SNNs trained directly with error-backpropagation, bypassing the need for surrogate gradients typically used to overcome the discontinuity inherent in spiking processes.
Consequently, networks can be trained with arbitrary temporal precision, opening doors to more realistic and efficient neuromorphic computing. Demonstrations show the model’s ability to detect temporally separated features without external memory mechanisms, leveraging internal delays for high performance.
Notably, the research demonstrates that SpikingGamma SNNs achieve superior accuracy compared to existing online methods on established neuromorphic benchmarks including DVS Gesture, SHD, and SSC. Performance was evaluated across tasks such as delay detection and echo-location, showcasing the model’s capacity for sparse and precise temporal coding over extended timeframes.
This advancement not only provides an alternative to current recurrent SNNs reliant on surrogate gradient training, but also establishes a direct pathway for mapping SNNs onto dedicated neuromorphic hardware, promising significant gains in energy efficiency and processing speed. The team’s findings suggest a future where SNNs can operate at arbitrarily high temporal resolution, unlocking the full potential of brain-inspired computing.
SpikingGamma neuron implementation via recursive memory and sigma-delta encoding offers efficient and biologically plausible computation
SpikingGamma neurons, incorporating internal recursive memory structures and sigma-delta spike-coding, form the core of this research. Specifically, adaptive recursive memory within each neuron creates a smoothed, delayed representation of past inputs, effectively capturing temporal dynamics.
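The adaptive recursive memory described above can be pictured as a cascade of leaky integrators, where each stage holds a progressively smoother and more delayed copy of the input. The following is a minimal illustrative sketch of such a filter; the stage count `K` and transfer rate `alpha` are hypothetical values, and the actual model additionally makes the transfer rates learnable per stage.

```python
import numpy as np

def gamma_memory(x, K=4, alpha=0.5):
    """Run a K-stage recursive memory filter over a 1-D input signal.

    Each stage is a leaky integrator fed by the previous stage, so stage k
    holds a smoothed, increasingly delayed representation of past inputs.
    (Illustrative sketch only; K and alpha are assumed values.)
    """
    g = np.zeros(K + 1)          # g[0] is the raw input tap
    traces = []
    for x_t in x:
        g[0] = x_t
        for k in range(1, K + 1):
            g[k] = (1 - alpha) * g[k] + alpha * g[k - 1]
        traces.append(g[1:].copy())
    return np.array(traces)      # shape (T, K)
```

Feeding an impulse through the cascade shows the delay effect: later taps peak at later time steps, which is what lets a single neuron represent temporally separated features without external memory.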
This internal state is then encoded into a spike-train using sigma-delta spike-coding, a technique analogous to pulsed sigma-delta coding in electrical circuits. The resulting spike-train is decoded at the receiving neuron, enabling direct error backpropagation without approximations. This approach bypasses the vanishing and exploding gradient problems inherent in recurrent SNNs trained with BPTT and RTRL, which typically require extensive memory and timestep complexity.
The study demonstrates that this model can learn fine temporal patterns with minimal spiking activity in an online manner. Networks were scaled to complex tasks and benchmarks, achieving competitive accuracy while remaining insensitive to the temporal resolution of the model. By removing self-recurrency and enabling direct error backpropagation, the SpikingGamma model offers a viable route for mapping SNNs to neuromorphic hardware, facilitating the development of energy-efficient and low-latency AI systems. The SpikingGamma neuron maintains a running estimate, ŷ_j, encoded by output spikes, with this estimate expressed in the same kernel basis as the input, ensuring consistency across layers.
This allows for direct passing of information to downstream synapses during training without spike decoding. Whenever the mismatch z_j = y_j − ŷ_j between the neuron's value y_j and its spike-encoded estimate ŷ_j exceeds a threshold, a correction triggers a spike output, s_j, which is added back into the estimate. Crucially, the error bypasses the spikes during training, eliminating the need for surrogate gradients.
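The sigma-delta mechanism can be sketched in a few lines: a running estimate tracks the signal, a signed spike fires whenever the mismatch exceeds the threshold, and the receiver reconstructs the estimate by accumulating spikes. This is an illustrative simplification (the actual neurons operate on recursive-memory states and use adaptive thresholds, and `theta = 0.05` below is an assumed value).

```python
import numpy as np

def sigma_delta_encode(signal, theta=0.05):
    """Encode a real-valued signal into signed spikes via sigma-delta coding.

    A running estimate y_hat tracks the signal; whenever the mismatch
    z = y - y_hat reaches the threshold theta in magnitude, a signed spike
    of size theta is emitted and added back into the estimate.
    """
    y_hat = 0.0
    spikes = []
    for y in signal:
        z = y - y_hat
        s = theta * np.sign(z) if abs(z) >= theta else 0.0
        y_hat += s
        spikes.append(s)
    return np.array(spikes)

def sigma_delta_decode(spikes):
    # Receiving side: accumulating the spikes reconstructs the estimate.
    return np.cumsum(spikes)
```

On a slowly varying signal, the decoded estimate stays within roughly one threshold of the input while only a small fraction of time steps carry a spike, which is the source of the sparse coding the article describes.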
During experiments, parameters were initialized using a uniform distribution U(−√k, √k) for synaptic weights and biases, with k = 1/n_in, where n_in is the number of input features, and bucket weights were sampled from N(0, 0.1). Adaptive thresholding was implemented, increasing the threshold in proportion to y(t−1) using the equation θ(t) = θ0 + y(t−1) · m_f, where m_f = θ0, to maintain training stability.
Bucket transfer rates were initialized using linearly separated values, l_k ∈ linspace(L_start, L_end, K), with L_start = 0.1, L_end = 0.9, and α_k = (l_k)^F, where F ∈ (0, 1), to facilitate learning long-range temporal filtering. The study highlights the potential for direct mapping of SNNs to hardware due to the model’s architecture and training methodology, offering an alternative to current recurrent SNNs reliant on surrogate gradients. This advancement circumvents the approximation errors inherent in existing online methods, enabling learning at finer temporal resolutions while effectively capturing long-range temporal dependencies.
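The transfer-rate initialization is a one-liner: base levels are spread linearly and then raised to a power F in (0, 1), which compresses the spacing toward the slow end and covers a range of time scales. A minimal sketch, with F = 0.5 as an illustrative choice:

```python
import numpy as np

def init_bucket_rates(K, L_start=0.1, L_end=0.9, F=0.5):
    """Initialize K bucket transfer rates as described in the text.

    Base levels l_k are linearly spaced in [L_start, L_end]; each rate is
    alpha_k = l_k ** F with F in (0, 1). (F = 0.5 is an assumed value.)
    """
    l = np.linspace(L_start, L_end, K)
    return l ** F
```

Because F < 1 lifts the small base levels more than the large ones, the resulting rates span fast to slow integration stages, which is what supports long-range temporal filtering.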
By leveraging subthreshold dynamics, the model promotes sparse spike coding, enhancing compatibility with the communication constraints of neuromorphic hardware and bringing large-scale SNN deployment closer to reality. The SpikingGamma model offers both an alternative to recurrent SNNs trained with surrogate gradients and a streamlined pathway for mapping SNNs onto hardware platforms.
Experiments demonstrate competitive accuracy on complex tasks and benchmarks, alongside an ability to learn fine temporal patterns with minimal spiking activity. While current benchmarks, often based on rate-code models, may limit the full evaluation of sparsity gains, the authors anticipate larger benefits with datasets exhibiting richer temporal structures.
The authors acknowledge that existing large-scale benchmarks, frequently defined by rate-coded models, pose a challenge for fully assessing the efficiency gains offered by sparse temporal coding strategies. Future research could focus on developing benchmarks specifically designed to target temporal coding, thereby better revealing the potential of SpikingGamma. Furthermore, extending the modifications within SpikingGamma to feedforward network architectures, such as Transformers, appears promising for creating scalable and powerful deep SNNs for sequence learning.
👉 More information
🗞 SpikingGamma: Surrogate-Gradient Free and Temporally Precise Online Training of Spiking Neural Networks with Smoothed Delays
🧠 ArXiv: https://arxiv.org/abs/2602.01978
