Neuromorphic Spiking Neural Network Mimics Brain, Boosts AI Efficiency: 81.4% Accuracy Achieved

The Neuromorphic Spiking Neural Network (SNN) is an artificial neural network that mimics the human brain's operation, offering a more powerful and efficient approach to artificial intelligence. The SNN uses analog spikes for signal representation and is well suited to the computing-in-memory (CIM) technique, which reduces the need for massive data transfer between processing units and memory. A test chip implementing the SNN achieved an average inference latency of 196 ns and an inference accuracy of 81.4%, consuming 242 µW with an energy efficiency of 474 pJ/inference/neuron. The research was conducted by Chao-Yu Chen, Yan-Siou Dai, and Hao-Chiao Hong.

What is the Neuromorphic Spiking Neural Network?

The Neuromorphic Spiking Neural Network (SNN) is a type of artificial neural network (ANN) that mimics the operation of the human brain, often considered the most powerful and energy-efficient computer in the world. This makes the SNN a promising approach to creating more powerful and efficient ANNs. The SNN has two main characteristics. First, signals are represented by analog spikes rather than digital values, so a single wire can, in theory, carry a spiking signal with unlimited resolution. Second, a simple digital buffer can drive a heavy load without significant power consumption.

The SNN is inherently analog, making it well suited to the emerging computing-in-memory (CIM) technique. CIM addresses a key shortcoming of traditional ANNs, which require an enormous number of multiply-and-accumulate (MAC) operations in both the training and inference phases. High-performance general-purpose GPUs and AI accelerators are commonly used to meet this computational demand. However, these accelerators can suffer from the von Neumann bottleneck: their target applications require massive data transfer between the processing units (PUs) and the memory over a bandwidth-limited memory bus, which restricts throughput and consumes substantial power.
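
To make the scale of this demand concrete, the sketch below (in Python, with illustrative layer sizes not taken from the paper) counts the MAC operations of a single fully connected layer; on a von Neumann machine, each MAC implies fetching a weight across the memory bus.

```python
import numpy as np

# A minimal sketch (not from the paper) of why ANN inference is
# MAC-dominated: a fully connected layer performs one
# multiply-and-accumulate per weight, and on a von Neumann machine
# every one of those weights must cross the memory bus.

rng = np.random.default_rng(0)
inputs = rng.random(256)            # layer input activations (assumed size)
weights = rng.random((10, 256))     # 10 neurons x 256 synapses (assumed size)

# Explicit MAC loop: 10 * 256 = 2,560 multiply-and-accumulate operations.
outputs = np.zeros(10)
for neuron in range(10):
    acc = 0.0
    for syn in range(256):
        acc += weights[neuron, syn] * inputs[syn]   # one MAC
    outputs[neuron] = acc

assert np.allclose(outputs, weights @ inputs)
print(f"MAC operations: {weights.size}")  # each one implies a weight fetch
```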

How Does the Computing-in-Memory Technique Work?

The CIM technique performs the MAC operations directly as the synaptic weights are read from memory, reducing the need for massive data transfer between the PUs and the memory. CIM circuit designs usually convert the ANN's digital inputs into analog signals that drive the read wordlines (RWLs) of the memory cells. The responses of these cells are summed in the analog domain on the read bitlines (RBLs) to produce analog MAC results. Each RBL then uses an analog-to-digital converter (ADC) to convert its analog MAC result back to a digital value before subsequent digital signal processing.
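
A behavioral sketch of this converter-based CIM flow is shown below; the 4-bit DAC/ADC resolutions, the array size, and the full-scale choice are assumptions for illustration, not the paper's circuit values.

```python
import numpy as np

# Behavioral model of the conventional (converter-based) CIM flow:
# DAC -> analog wordline drive -> analog summation on bitlines -> ADC.
# Resolutions and array dimensions are illustrative assumptions.

rng = np.random.default_rng(1)
DAC_BITS, ADC_BITS = 4, 4

digital_inputs = rng.integers(0, 2**DAC_BITS, size=256)  # per-RWL input codes
weights = rng.random((256, 10))                           # cell conductances

# DAC: digital input codes -> analog read-wordline (RWL) drive levels.
rwl_levels = digital_inputs / (2**DAC_BITS - 1)

# Analog summation: each cell's response accumulates on its read
# bitline (RBL), producing one analog MAC result per column.
rbl_analog = rwl_levels @ weights

# ADC: quantize each RBL's analog MAC result back to a digital code.
full_scale = rbl_analog.max()
rbl_digital = np.round(rbl_analog / full_scale * (2**ADC_BITS - 1))

print(rbl_digital.astype(int))
```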

However, this process requires a considerable number of digital-to-analog converters (DACs) and ADCs, which can erode the performance gains of the CIM technique. To address this, a design combining sensors with a CIM macro has been proposed to demonstrate pure analog CIM (ACIM) circuits that use no data converters at all. This ACIM design has been shown to be an effective approach for implementing highly efficient ANNs.
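
A minimal sketch of the pure-ACIM idea follows: the sensor outputs stay analog end to end, so the DAC and ADC stages of the previous sketch simply disappear (array sizes are again assumed for illustration).

```python
import numpy as np

# Sketch of pure analog CIM (ACIM): analog sensor outputs drive the
# RWLs directly and the RBL results feed analog neurons, so no DAC or
# ADC appears anywhere in the signal path. Dimensions are assumptions.

rng = np.random.default_rng(2)
sensor_levels = rng.random(256)      # analog sensor outputs, no DAC needed
weights = rng.random((256, 10))      # cell conductances (synaptic weights)

# Analog-domain MAC on the RBLs; the result feeds the analog neuron
# somas directly instead of being digitized by an ADC.
rbl_currents = sensor_levels @ weights
print(rbl_currents)
```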

What is the Time-to-First-Spike Coding Scheme?

The Time-to-First-Spike (TTFS) coding scheme is a method used in the SNN to represent information in the timing of analog spikes. The researchers demonstrate the first functional neuromorphic SNN that uses the TTFS coding scheme together with the second-order leaky integrate-and-fire (SOLIF) neuron model to achieve superior biological plausibility. An 8 kb SRAM macro implements the synapses of the neurons, enabling ACIM operation and producing the current-type dendrite signals of the neurons.
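
In a common form of TTFS coding, a stronger input fires earlier, so each value is carried by the timing of a single spike. The sketch below illustrates this; the linear mapping and the 200 ns window are illustrative assumptions, not the paper's circuit parameters.

```python
import numpy as np

# Minimal sketch of time-to-first-spike (TTFS) coding: a stronger
# input fires earlier, so one spike's timing carries the whole value.
# The linear mapping and 200 ns window are assumed for illustration.

T_WINDOW_NS = 200.0

def ttfs_encode(x: np.ndarray) -> np.ndarray:
    """Map normalized intensities in [0, 1] to first-spike times (ns)."""
    x = np.clip(x, 0.0, 1.0)
    return (1.0 - x) * T_WINDOW_NS   # x = 1 fires at t = 0; x = 0 at the deadline

def ttfs_decode(t: np.ndarray) -> np.ndarray:
    """Invert the encoding: earlier spikes decode to larger values."""
    return 1.0 - t / T_WINDOW_NS

pixels = np.array([0.9, 0.5, 0.1])
spike_times = ttfs_encode(pixels)
print(spike_times)                     # [ 20. 100. 180.] ns
assert np.allclose(ttfs_decode(spike_times), pixels)
```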

A novel low-leakage 8T (LL8T) SRAM cell is proposed for the SRAM macro to reduce the read leakage currents on the RBLs during ACIM operation. Each neuron's soma is implemented with low-power analog circuits that realize the SOLIF model, processing the dendrite signals and generating the final analog output spikes. Because the computation is analog throughout, no data converters are required.
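
The SOLIF model adds a second state variable to the classic leaky integrate-and-fire neuron: the dendrite current and the membrane potential each leak with their own time constant. The discrete-time sketch below captures that behavior; the time constants, threshold, and input pattern are assumptions for illustration, since the paper realizes the model in analog circuitry rather than software.

```python
from typing import Optional

import numpy as np

# Discrete-time sketch of a second-order leaky integrate-and-fire
# (SOLIF) neuron: the dendrite current i_syn and membrane potential
# v_mem each leak with their own time constant, giving second-order
# dynamics. All parameter values here are illustrative assumptions.

DT = 1e-9            # 1 ns simulation time step
TAU_SYN = 20e-9      # synaptic (dendrite) current time constant
TAU_MEM = 40e-9      # membrane time constant
V_TH = 1.0           # firing threshold

def solif_first_spike(input_spikes: np.ndarray, w: float = 0.3) -> Optional[float]:
    """Return the neuron's first output spike time in seconds, or None."""
    i_syn, v_mem = 0.0, 0.0
    for step, spike in enumerate(input_spikes):
        i_syn += -DT / TAU_SYN * i_syn + w * spike   # current: leak + weighted input
        v_mem += DT / TAU_MEM * (i_syn - v_mem)      # membrane: leaky integration
        if v_mem >= V_TH:
            return step * DT                          # fire on threshold crossing
    return None

# Drive the neuron with a burst of input spikes in the first 50 ns.
spikes = np.zeros(200)
spikes[:50] = 1.0
print(solif_first_spike(spikes))
```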

How Effective is the Neuromorphic Spiking Neural Network?

A test chip implementing the complete output layer of the proposed SNN was fabricated in 90 nm CMOS, with an active area of 553 × 411.86 µm². The measurement results show that the SNN implementation achieves an average inference latency of 196 ns and an inference accuracy of 81.4%. It consumes 242 µW with an energy efficiency of 474 pJ/inference/neuron.

These results demonstrate the effectiveness of the SNN and its potential for implementing highly efficient ANNs. The use of the TTFS coding scheme and the SOLIF neuron model, along with the ACIM technique and the novel LL8T SRAM cell, contribute to the superior performance and energy efficiency of the SNN.

Who are the Key Contributors to this Research?

This research was conducted by Chao-Yu Chen, a Graduate Student Member of IEEE, Yan-Siou Dai, and Hao-Chiao Hong, a Senior Member of IEEE. Chao-Yu Chen and Hao-Chiao Hong are affiliated with the Institute of Electrical and Computer Engineering, National Yang Ming Chiao Tung University, Hsinchu, Taiwan. Yan-Siou Dai was previously affiliated with the same institute and is now with the Technology Innovation Department, Novatek Microelectronics Corporation, Hsinchu, Taiwan. The research was supported in part by the National Science and Technology Council, Taiwan, and the Ministry of Economic Affairs, Taiwan.

Publication details: “A Neuromorphic Spiking Neural Network Using Time-to-First-Spike Coding Scheme and Analog Computing in Low-Leakage 8T SRAM”
Publication Date: 2024-01-01
Authors: Chao-Yu Chen, Yan-Siou Dai, and Hao-Chiao Hong
Source: IEEE Transactions on Very Large Scale Integration (VLSI) Systems
DOI: https://doi.org/10.1109/tvlsi.2024.3368849
