The brain’s computational power extends beyond the strength of connections between neurons, and researchers are increasingly investigating the role of timing in neural processing. Pengfei Sun, Jascha Achterberg, and Zhe Su, alongside Dan F. M. Goodman and Danyal Akarca, demonstrate that varying transmission delays, the time it takes a signal to travel from one neuron to the next, significantly enhances the performance of artificial neural networks. Their work challenges the conventional focus on synaptic weights by showing that learning these delays alongside weights unlocks state-of-the-art results, even with dramatically simplified, low-precision networks. This discovery enables memory-efficient systems that maintain high accuracy with substantially compressed networks, and suggests that temporal heterogeneity is a crucial principle for building intelligent systems that effectively process information from the dynamic, time-dependent physical world.
Axonal Delays Improve Spiking Neural Network Learning
This research investigates the role of axonal delays in spiking neural networks for learning temporal dynamics. The researchers demonstrate that delays are a crucial, often underestimated computational resource that significantly enhances performance, particularly on tasks with complex temporal dependencies. Delays improve accuracy on temporally complex datasets and can act as a temporal reservoir, allowing networks to perform well even when only the delays are optimized and the weights are left fixed. The study further shows that long delays are particularly important for capturing long-range temporal dependencies, and that a broad distribution of delay values is crucial for aligning temporally separated features.
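A minimal sketch of this delays-only "reservoir" setup, assuming a PyTorch-style implementation with illustrative layer sizes that are not taken from the paper:

```python
import torch

# "Temporal reservoir" configuration: random synaptic weights are frozen and
# only the axonal delays receive gradient updates. All names, shapes, and the
# one-delay-per-input-neuron choice are assumptions for illustration.
weights = torch.randn(200, 700)                   # fixed random weights (not trained)
delays = torch.zeros(700, requires_grad=True)     # learnable delays, in time steps
optimizer = torch.optim.Adam([delays], lr=1e-2)   # optimizer sees only the delays
```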
Furthermore, the research demonstrates that delays effectively compensate for reduced weight precision: a metric quantifying how much delays contribute to overall accuracy shows that this contribution can exceed 50% of the required bit budget. Experiments used datasets of varying temporal complexity, including speech and event-camera data, and employed single-layer spiking neural networks trained with surrogate gradient learning. The work argues that axonal delays are not merely a biological detail but a fundamental computational resource that can significantly enhance the performance and energy efficiency of spiking neural networks, particularly for tasks requiring the processing of temporal information. The findings have implications for the design of future neuromorphic computing systems.
Axonal Delays Enhance Spiking Neural Network Performance
This study introduces an approach to spiking neural networks that treats axonal delay as a key learnable computational parameter. The researchers hypothesized that heterogeneity in these delays could enhance performance on temporally complex tasks, particularly in embodied systems where timing is crucial. They built a spiking neural network model that simulates neuronal dynamics and incorporates learnable axonal delays alongside synaptic weights, optimizing the delays through backpropagation. The team developed an efficient method to compute the delay gradients, approximating the change in spike output with respect to a delay using the difference between the spike train at consecutive time steps.
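A minimal sketch of this finite-difference approximation, assuming integer delays applied by shifting a binary spike train; the function names and shapes are illustrative, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

# Delayed spike train: x_d(t) = s(t - d), with zero padding before t = 0.
def delayed(spikes: torch.Tensor, d: int) -> torch.Tensor:
    """Shift a (time, neurons) spike train forward by d steps."""
    return F.pad(spikes, (0, 0, d, 0))[: spikes.shape[0]]

# d x_d(t) / d d  ≈  s(t - d - 1) - s(t - d): the difference between the
# spike train at consecutive (delayed) time steps.
def delay_grad_estimate(spikes: torch.Tensor, d: int) -> torch.Tensor:
    return delayed(spikes, d + 1) - delayed(spikes, d)

spikes = (torch.rand(100, 8) < 0.1).float()  # 100 time steps, 8 neurons
grad = delay_grad_estimate(spikes, d=3)      # same shape as the spike train
```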
They also implemented a quantization scheme that reduces the precision of both weights and delays, achieving state-of-the-art accuracy despite aggressive compression of the weights to just 1.58 effective bits. Networks were trained with a Spikemax loss function and evaluated on classification accuracy, probing their ability to process temporal information. Experiments across diverse datasets, including speech recognition and event-camera data, consistently demonstrated that incorporating learnable delays significantly improved performance, particularly on tasks requiring precise timing. The study reveals that task performance depends on task-appropriate delay distributions, with temporally complex tasks requiring longer delays, and highlights the potential for creating memory-efficient, high-performance intelligent systems inspired by the brain’s temporal dynamics.
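Weights constrained to the three levels {-1, 0, +1} carry log2(3) ≈ 1.58 bits of information each, which is where the 1.58-bit figure comes from. A minimal sketch of such ternary quantization, assuming a simple per-tensor scale (the paper's exact scheme may differ):

```python
import torch

def quantize_ternary(w: torch.Tensor) -> torch.Tensor:
    """Map weights onto three levels {-1, 0, +1} times a per-tensor scale."""
    scale = w.abs().mean() + 1e-8            # illustrative scale choice
    q = torch.round(w / scale).clamp_(-1, 1)
    return q * scale                         # dequantized values used in the forward pass

w = torch.randn(200, 700)
w_q = quantize_ternary(w)
print(torch.unique(w_q).numel())             # 3 distinct levels
```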
Axonal Delays Boost Spiking Neural Network Performance
Scientists demonstrate that incorporating axonal delays into spiking neural networks significantly enhances performance on temporally complex tasks, achieving results comparable to current state-of-the-art methods. The research team trained networks to modify both synaptic weights and axonal delays, revealing a substantial performance increase across four datasets: Spiking Heidelberg Digits, Spiking Speech Commands, Neuromorphic TIDIGITS, and DVS Gesture. These datasets vary in temporal complexity, ranging from auditory stimuli requiring temporal integration to event-driven visual gestures. The experiments show that learning delays in addition to weights consistently improves peak attained task performance across all tested datasets, in line with recent evidence on the benefits of incorporating delays into neural networks.
The team implemented axonal delays as trainable parameters, whose number scales more favorably than the number of connection weights (one delay per axon rather than one weight per connection). Notably, the study reveals that networks can maintain high performance even with extremely imprecise weights, achieving 1.58-bit precision while still benefiting from learned delays. Ablation studies further demonstrate that task performance depends on task-appropriate delay distributions, with temporally complex tasks requiring longer delays, analogous to long-range connectivity. The research also characterizes a trade-off between delays and time constants, revealing how these parameters adaptively interact to shape resource efficiency within the learned solutions. These findings suggest that temporal heterogeneity represents a fundamental computational principle for embodied artificial intelligence, unlocking new modes of computation.
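A minimal sketch of a layer with one learnable delay per presynaptic neuron, assuming integer delays applied by indexing into the spike history; sizes, names, and the hard rounding are illustrative, and in training the rounding would be handled with a surrogate or straight-through gradient rather than as written here:

```python
import torch
import torch.nn as nn

class DelayedLinear(nn.Module):
    """Linear readout of spike trains with one learnable axonal delay per input neuron."""

    def __init__(self, n_in: int, n_out: int, max_delay: int = 64):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_out, n_in) * 0.1)   # n_out x n_in weights
        self.delay = nn.Parameter(torch.rand(n_in) * max_delay)      # only n_in delays

    def forward(self, spikes: torch.Tensor) -> torch.Tensor:
        # spikes: (time, n_in); delay each input channel, then apply the weights.
        T = spikes.shape[0]
        d = self.delay.round().long().clamp(0, T - 1)                # hard rounding (no gradient)
        t = torch.arange(T).unsqueeze(1)                             # (T, 1) time indices
        idx = (t - d).clamp(min=0)                                   # (T, n_in) source indices
        shifted = torch.gather(spikes, 0, idx) * (t >= d).float()    # zero before the delay
        return shifted @ self.weight.t()                             # (T, n_out)

layer = DelayedLinear(700, 200)
out = layer((torch.rand(100, 700) < 0.05).float())                   # 100 time steps in and out
```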
Learned Axonal Delays Boost Spiking Network Performance
This research demonstrates that incorporating axonal delay as a trainable parameter in spiking neural networks significantly enhances performance on temporally complex tasks. The team trained networks to adjust not only synaptic weights but also the delays between neurons, achieving state-of-the-art results even with highly compressed, low-precision weights. The ability to maintain accuracy with reduced weight precision represents a substantial advance, enabling more memory-efficient neural network solutions. The findings reveal an adaptive trade-off between learned delays and time constants, and highlight the importance of delay distributions tailored to the specific demands of a task, with more complex tasks benefiting from longer delays. By demonstrating that temporal heterogeneity can be exploited for efficient computation, this work suggests a biologically plausible approach to neural network design, particularly for applications involving time-series data and embodied intelligence.
👉 More information
🗞 Exploiting heterogeneous delays for efficient computation in low-bit neural networks
🧠 ArXiv: https://arxiv.org/abs/2510.27434
