Digital neuromorphic processors represent a compelling pathway towards energy-efficient, always-on artificial intelligence at the network edge. Researchers, including Amirreza Yousefzadeh from the University of Twente, present a comprehensive overview of building these processors, detailing the architectural principles behind fully digital systems. This work synthesizes previous findings from the SENECA platform, offering a step-by-step guide for those seeking to design their own digital neuromorphic hardware. By starting from a foundation of flexible RISC-V cores and progressively adding specialized components, the team demonstrates how to overcome performance and energy limitations, ultimately paving the way for scalable and adaptable EdgeAI solutions.
Neuromorphic Processor Architecture and Acceleration Techniques
Researchers detail the development of SENECA, a digital neuromorphic processor designed for efficient, low-power EdgeAI applications, charting a systematic architectural evolution from a flexible base platform to versions that incorporate specialized acceleration. They began with an array of RISC-V processing cores interconnected by a Network-on-Chip, or NoC, creating a highly adaptable neuromorphic processor that serves as a foundation for progressive enhancements: the integration of dedicated Neural Processing Elements, or NPEs, and of a loop controller that offloads control functions from the cores. The team selected the IBEX RISC-V core for its favorable area/performance trade-off and its robust open-source ecosystem; each core combines the RISC-V CPU, a NoC interface, and both instruction and data memory. The NoC is central to the architecture, enabling efficient communication between cores by time-multiplexing connections and thereby mirroring the plasticity of biological brains while operating within fixed electronic circuits.
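To make the control-offload idea concrete, here is a minimal Python sketch of how a loop controller driving parallel NPEs can take over the inner neuron-update loop from the RISC-V core. The descriptor format, names, and NPE count are illustrative assumptions, not the SENECA interface.

```python
# Hypothetical sketch: the RISC-V core builds one loop descriptor, and a loop
# controller iterates over neurons in NPE-sized chunks without per-iteration
# core intervention. Names and interface are assumptions for illustration.
from dataclasses import dataclass
import numpy as np

@dataclass
class LoopDescriptor:
    num_neurons: int           # how many output neurons to update
    weights: np.ndarray        # weight rows for these neurons
    activation: np.ndarray     # input activation vector
    state: np.ndarray          # accumulator / membrane state, updated in place

def loop_controller_run(desc: LoopDescriptor, num_npes: int = 8) -> None:
    """Stream neuron updates through the NPEs, one chunk per iteration."""
    for start in range(0, desc.num_neurons, num_npes):
        end = min(start + num_npes, desc.num_neurons)
        # Each NPE handles one multiply-accumulate lane of the chunk.
        desc.state[start:end] += desc.weights[start:end] @ desc.activation

# The core's only job is to fill in the descriptor and hand it off.
rng = np.random.default_rng(0)
desc = LoopDescriptor(
    num_neurons=32,
    weights=rng.standard_normal((32, 16)),
    activation=rng.standard_normal(16),
    state=np.zeros(32),
)
loop_controller_run(desc)
```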
Crucially, the NoC transmits data as packets that carry the source neuron ID, a scheme known as Address Event Representation, or AER, which lets the destination core decode each event correctly. Observing existing neuromorphic platforms, including NeuronFlow, Loihi, and SpiNNaker, the researchers concluded that the NoC rarely constitutes a performance bottleneck, likely because the algorithms are inherently sparse and each spike triggers a high density of operations at its destination. Instead, they identified the RISC-V processing core as the primary limitation in both performance and energy consumption, confirming the effectiveness of the NoC design.
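The following minimal sketch illustrates the AER idea with assumed field widths and a hypothetical fan-out table: a packet carries only the source neuron ID (plus an optional graded payload), and the destination core uses that ID to look up its local fan-out, so one small packet triggers many synaptic operations at the receiver.

```python
# Minimal AER sketch (field widths and fan-out table are hypothetical):
# the packet encodes the source neuron ID; the destination decodes it and
# applies the fan-out weights stored locally for that source.
SRC_BITS = 16        # bits reserved for the source neuron ID
PAYLOAD_BITS = 16    # bits for an optional graded value (0 for pure spikes)

def encode_event(src_neuron: int, payload: int = 0) -> int:
    assert 0 <= src_neuron < (1 << SRC_BITS)
    assert 0 <= payload < (1 << PAYLOAD_BITS)
    return (src_neuron << PAYLOAD_BITS) | payload

def decode_event(packet: int) -> tuple[int, int]:
    return packet >> PAYLOAD_BITS, packet & ((1 << PAYLOAD_BITS) - 1)

# Destination-side handling: the source ID indexes a local fan-out table.
fanout_table = {42: [(0, 0.5), (3, -0.25), (7, 1.0)]}  # src -> [(dst, weight)]
state = [0.0] * 8

packet = encode_event(src_neuron=42, payload=1)
src, value = decode_event(packet)
for dst, weight in fanout_table.get(src, []):
    state[dst] += weight * value
```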
Brain-Inspired Processors Achieve Extreme Energy Efficiency
Scientists are developing digital processors designed to mimic the efficiency of the brain for use in low-power, always-on EdgeAI applications. This work synthesizes findings from the SENECA platform to provide a step-by-step architectural perspective for designing these processors. The researchers note that a honey bee’s brain consumes less than 1 milliwatt of power while performing complex tasks; this remarkable energy efficiency inspires processors that exploit sparsity and event-driven computation. The team focuses on activation sparsity, processing only significant data to avoid unnecessary energy consumption, mirroring the 1 to 10% neuron activity observed in biological brains.
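A minimal sketch of this sparsity exploitation, assuming a simple threshold-based significance test (the threshold value and function names are illustrative): only activations whose change exceeds the threshold become events, so downstream work tracks the small active fraction rather than every neuron.

```python
# Sketch: turn only significant activation changes into events, so the event
# count mirrors the ~1-10% active fraction rather than the full layer size.
import numpy as np

def significant_events(prev: np.ndarray, curr: np.ndarray, threshold: float = 0.1):
    """Return (neuron_id, delta) pairs for changes worth transmitting."""
    delta = curr - prev
    active = np.flatnonzero(np.abs(delta) > threshold)
    return [(int(i), float(delta[i])) for i in active]

rng = np.random.default_rng(1)
prev = rng.random(1000)
curr = prev.copy()
curr[rng.choice(1000, size=50, replace=False)] += 0.5  # ~5% of neurons change

events = significant_events(prev, curr)
print(f"{len(events)} events out of {curr.size} neurons")  # 50, not 1000
```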
Experiments reveal that event-driven processors, which operate only when data changes, can dramatically reduce power consumption by scaling it in proportion to the number of events received, a key feature of the developed architecture. Furthermore, co-locating memory and processing units, mirroring the brain’s structure, minimizes data movement, a major source of power consumption in conventional systems. The scientists demonstrate that integrating neuromorphic principles into digital processors offers significant advantages over purely analog designs: digital circuits are compatible with advanced technology nodes, enabling smaller, faster, and more efficient transistors, so digital neuromorphic systems can fully exploit the latest manufacturing techniques to enhance performance and reduce costs. Digital logic blocks are also easily duplicated and reused, which simplifies the creation of complex architectures and increases reconfigurability, making these processors adaptable to a wide range of applications. While analog designs can achieve very low power consumption, digital systems offer a compelling balance of performance, energy efficiency, and scalability.
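The receiving side of this scaling can be sketched as follows; the per-event energy figure is a placeholder, not a measured SENECA number, and the point is only that work (and a first-order energy estimate) grows with the number of events rather than with the layer size.

```python
# Sketch: an event-driven receiver does work only per incoming event, so a
# first-order energy estimate is proportional to the event count.
# ENERGY_PER_EVENT_NJ is an illustrative placeholder, not a measured value.
import numpy as np

ENERGY_PER_EVENT_NJ = 1.0  # hypothetical cost of handling one event's fan-out

def process_events(events, weights: np.ndarray, state: np.ndarray) -> float:
    """Apply each event's weight column to the neuron state; return energy."""
    for src, value in events:
        state += weights[:, src] * value  # work happens only when an event arrives
    return len(events) * ENERGY_PER_EVENT_NJ

rng = np.random.default_rng(2)
weights = rng.standard_normal((256, 1000))
state = np.zeros(256)

few_events = [(int(i), 1.0) for i in rng.choice(1000, size=20)]
print(process_events(few_events, weights, state))  # energy scales with 20 events
```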
Flexible EdgeAI Processors for Sparse Data
This work presents a systematic architectural exploration of digital processors designed for efficient, always-on EdgeAI applications. Researchers detail a progression from a basic array of RISC-V cores connected by a Network-on-Chip, demonstrating how incremental additions of dedicated Neural Processing Elements and loop control can enhance performance and energy efficiency. The study highlights software and mapping techniques, including spike grouping and event-driven convolution, which are crucial for processing sparse, event-based data commonly found in neuromorphic computing. By focusing on architectural trade-offs and performance bottlenecks, the team demonstrates the benefits of flexibility in processor design, allowing for domain-specific acceleration without requiring a complete overhaul of the underlying hardware.
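To illustrate the mapping techniques named above, here is a minimal sketch of event-driven convolution with spike grouping. The data layout and grouping granularity are assumptions for illustration, not necessarily the SENECA mapping: each input event scatters a weight kernel into the output map, and events from the same input channel are grouped so that channel's kernel is fetched only once.

```python
# Sketch of event-driven convolution with spike grouping (layouts assumed):
# instead of a dense pass over the whole input, each event (channel, x, y, value)
# scatters its kernel into the output accumulator.
from collections import defaultdict
import numpy as np

def event_driven_conv(events, weights, out):
    """events: (channel, x, y, value); weights: [C_out, C_in, K, K];
    out: [C_out, H-K+1, W-K+1] accumulator, updated in place."""
    c_out, c_in, k, _ = weights.shape
    _, h_out, w_out = out.shape

    grouped = defaultdict(list)          # spike grouping: batch events per
    for ci, x, y, v in events:           # input channel to reuse its kernel
        grouped[ci].append((x, y, v))

    for ci, group in grouped.items():
        kernel = weights[:, ci]          # "fetched" once for the whole group
        for x, y, v in group:
            for di in range(max(0, x - h_out + 1), min(k, x + 1)):
                for dj in range(max(0, y - w_out + 1), min(k, y + 1)):
                    out[:, x - di, y - dj] += v * kernel[:, di, dj]

rng = np.random.default_rng(3)
weights = rng.standard_normal((4, 2, 3, 3))
out = np.zeros((4, 6, 6))                # for an 8x8 input with a 3x3 kernel
events = [(0, 2, 3, 1.0), (0, 2, 4, 0.5), (1, 7, 7, 1.0)]  # sparse input pixels
event_driven_conv(events, weights, out)
```

Grouping by input channel is only one possible batching choice; grouping by spatial neighborhood would similarly amortize weight fetches across events.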
The approach synthesizes findings from previous work on the SENECA platform, providing a coherent, step-by-step guide for those seeking to build their own digital processors for EdgeAI. While the study acknowledges that performance is heavily influenced by the specific implementation of the Network-on-Chip and the efficiency of data mapping, it establishes a valuable framework for future development in this rapidly evolving field. Future research directions could explore the optimization of these techniques for even greater energy savings and the application of these processors to more complex AI tasks.
👉 More information
🗞 From RISC-V Cores to Neuromorphic Arrays: A Tutorial on Building Scalable Digital Neuromorphic Processors
🧠 ArXiv: https://arxiv.org/abs/2512.00113
