Researchers develop FeNN, a programmable RISC-V vector processor for Field Programmable Gate Arrays (FPGAs), to efficiently simulate Spiking Neural Networks (SNNs). FeNN achieves faster SNN classification than both embedded GPUs and the Loihi neuromorphic system, using stochastic rounding and saturation to maintain numerical accuracy at low precision while consuming minimal hardware resources.
The pursuit of energy-efficient artificial intelligence increasingly focuses on biologically inspired neural networks, specifically Spiking Neural Networks (SNNs), which mimic the pulsed communication of neurons more closely than conventional Artificial Neural Networks (ANNs). Current hardware architectures, optimised for the dense matrix operations characteristic of ANNs, struggle with the sparse, event-driven nature of SNNs. Researchers Zainab Aizaz, James C. Knight, and Thomas Nowotny, all from the School of Engineering and Informatics at the University of Sussex, address this challenge in their work, “FeNN: A RISC-V vector processor for Spiking Neural Network acceleration”. They present a novel, programmable vector processor, FeNN, designed to efficiently simulate SNNs on Field Programmable Gate Arrays (FPGAs), demonstrating performance gains over both embedded GPUs and the neuromorphic Loihi system, and offering a flexible solution applicable across a range of computing platforms.
Spiking neural networks (SNNs) offer a potential route to substantially reducing the energy consumption of artificial intelligence systems, yet current mainstream accelerators, such as graphics processing units (GPUs) and tensor processing units (TPUs), are optimised for the high arithmetic intensity typical of standard artificial neural networks (ANNs). This creates a mismatch: SNNs operate on principles closer to those of biological neurons, communicating with discrete spikes rather than continuous values, so their computation is sparse and event-driven and conventional hardware struggles to simulate it efficiently. Field programmable gate arrays (FPGAs) are emerging as a promising platform: they are well suited to applications with lower arithmetic intensity and offer high off-chip memory bandwidth alongside substantial on-chip memory capacity.
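As a rough illustration of this difference (a sketch written for this article, not code from the paper), the loops below contrast a dense, ANN-style weighted sum, in which every weight is read and multiplied on every update, with an event-driven, SNN-style update, in which only the weight rows of neurons that actually spiked are touched and the multiplication disappears because a spike carries no value.

```c
#include <stddef.h>

/* Dense, ANN-style update: every weight is read and multiplied,
 * regardless of what the inputs are doing. Weights are stored with
 * one row per presynaptic neuron (w[j * n_post + i]). */
void dense_update(const float *w, const float *in, float *out,
                  size_t n_pre, size_t n_post)
{
    for (size_t i = 0; i < n_post; ++i)
        out[i] = 0.0f;
    for (size_t j = 0; j < n_pre; ++j)
        for (size_t i = 0; i < n_post; ++i)
            out[i] += w[j * n_post + i] * in[j];
}

/* Event-driven, SNN-style update: only the weight rows of neurons
 * that spiked this timestep are read, and because a spike carries
 * no value the multiplication disappears entirely. */
void event_driven_update(const float *w, const size_t *spike_ids,
                         size_t n_spikes, float *input_current,
                         size_t n_post)
{
    for (size_t s = 0; s < n_spikes; ++s) {
        const float *row = &w[spike_ids[s] * n_post];
        for (size_t i = 0; i < n_post; ++i)
            input_current[i] += row[i];
    }
}
```

Because only a small fraction of neurons typically spike in any given timestep, the event-driven loop performs a correspondingly small fraction of the memory traffic and arithmetic, which is precisely the workload shape that dense-matrix hardware handles poorly.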
FeNN, a novel RISC-V-based soft vector processor, addresses these limitations by accelerating SNN simulation on FPGAs. Unlike dedicated neuromorphic hardware, which often prioritises energy efficiency at the expense of flexibility, FeNN is fully programmable, enabling integration with existing applications and deployment across a range of platforms, from resource-constrained edge devices to cloud infrastructure. As a ‘soft processor’, FeNN is implemented in the configurable logic of the FPGA rather than as a fixed hardware component, and it is built on RISC-V, an open-standard instruction set architecture that allows customisation and optimisation.
The core innovation of FeNN lies in its ability to efficiently manage the sparse and event-driven nature of SNNs, contrasting with the dense matrix operations common in GPUs. This is achieved through a combination of architectural optimisations and algorithmic techniques, carefully balancing performance, energy efficiency, and resource utilisation. By leveraging the inherent parallelism of FPGAs and employing a custom instruction set tailored to SNN operations, FeNN reduces the computational overhead associated with simulating these networks, enabling faster and more energy-efficient processing.
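To make this concrete, the following sketch (an illustration with assumed details, not FeNN code) shows the kind of neuron-update kernel such a processor runs: a leaky integrate-and-fire update over 16-bit fixed-point state. The loop body is identical for every neuron, so it maps naturally onto parallel vector lanes.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical leaky integrate-and-fire (LIF) update over 16-bit
 * fixed-point membrane voltages. `decay_q15` is the leak factor in
 * Q15 format (e.g. 0.95 is roughly 31130). The same operations are
 * applied to every neuron, so the loop vectorises across lanes. */
void lif_update(int16_t *v, const int16_t *input_current, uint8_t *spiked,
                size_t n_neurons, int16_t decay_q15,
                int16_t v_thresh, int16_t v_reset)
{
    for (size_t i = 0; i < n_neurons; ++i) {
        /* Leak (Q15 multiply), then integrate the input current. */
        int32_t vi = ((int32_t)v[i] * decay_q15) >> 15;
        vi += input_current[i];

        /* Clamp to the 16-bit range rather than letting it wrap. */
        if (vi > INT16_MAX) vi = INT16_MAX;
        if (vi < INT16_MIN) vi = INT16_MIN;

        /* Threshold: emit a spike and reset, or record no spike. */
        if (vi >= v_thresh) {
            spiked[i] = 1;
            vi = v_reset;
        } else {
            spiked[i] = 0;
        }
        v[i] = (int16_t)vi;
    }
}
```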
A crucial aspect of FeNN’s design is its use of stochastic rounding and saturation to preserve accuracy while working at low numerical precision. Rather than always rounding to the nearest representable value, stochastic rounding rounds a result up or down with a probability proportional to how close it lies to each neighbouring value, so rounding errors cancel out on average; saturation clamps values that exceed the representable range instead of letting them wrap around. Together, these techniques allow SNN state to be held in narrow fixed-point formats with little loss of accuracy and minimal hardware cost. The demonstrated performance and adaptability of FeNN position it as a compelling alternative to both traditional and dedicated neuromorphic approaches, offering a unique combination of flexibility, efficiency, and programmability.
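A minimal sketch of the idea (again an illustration rather than the hardware implementation): a wider accumulator is narrowed by adding a random value below the bits to be discarded before truncating, so the result rounds up with probability proportional to the discarded fraction, and saturation clamps anything outside the representable range.

```c
#include <stdint.h>
#include <stdlib.h>

/* Narrow a 32-bit fixed-point accumulator to 16 bits by discarding
 * `shift` fractional bits (assume a small shift, well below 31).
 * Adding a uniform random value below the discarded bits before
 * truncating makes the result round up with probability proportional
 * to the discarded fraction, so the rounding error is zero on average.
 * Saturation then clamps anything outside the 16-bit range. */
static int16_t round_stochastic_sat(int32_t acc, unsigned shift)
{
    const int32_t mask = (1 << shift) - 1;   /* bits that will be lost */
    const int32_t r = rand() & mask;         /* random rounding offset */

    int32_t rounded = (acc + r) >> shift;    /* truncate after offset  */

    if (rounded > INT16_MAX) rounded = INT16_MAX;  /* saturate high */
    if (rounded < INT16_MIN) rounded = INT16_MIN;  /* saturate low  */
    return (int16_t)rounded;
}
```

In hardware, the random bits would come from an on-chip pseudo-random source rather than `rand()`, but the averaging property is the same: many small, unbiased rounding errors no longer accumulate into a systematic drift, which is what makes very narrow fixed-point formats viable.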
Researchers anticipate FeNN will facilitate wider adoption of SNNs, enabling a new generation of AI applications that are both powerful and energy-efficient. The ability to seamlessly integrate SNNs with existing software stacks and deploy them across diverse platforms will be key to unlocking their full potential.
Future work will focus on developing a comprehensive software toolchain for FeNN, including a compiler, debugger, and performance profiler, simplifying the development and deployment of SNN-based applications. Further investigation into advanced memory management techniques is also planned, aiming to reduce energy consumption and improve performance.
The development of FeNN represents an advance in neuromorphic computing, demonstrating the viability of FPGAs as a platform for accelerating SNNs. By carefully balancing performance, energy efficiency, and programmability, the researchers have created a platform that empowers researchers and developers to explore the full potential of SNNs, paving the way for more energy-efficient AI systems.
The successful implementation of FeNN also highlights the importance of tailoring hardware architectures to the specific requirements of emerging computational paradigms: by addressing the unique characteristics of SNNs, it outperforms both conventional accelerators and dedicated neuromorphic approaches, underscoring the need for continued innovation in hardware design.
👉 More information
🗞 FeNN: A RISC-V vector processor for Spiking Neural Network acceleration
🧠 DOI: https://doi.org/10.48550/arXiv.2506.11760
