Privacy-Preserving Fall Detection Achieves 84% F1 Score on Loihi 2 Using Sony IMX636 and 55× Fewer Synaptic Operations

Fall detection systems offer a crucial safety net for an ageing population, yet current approaches often struggle to balance accuracy with privacy and low power consumption. Lyes Khacef, Philipp Weidel, Susumu Hogyoku, and colleagues now demonstrate a significant advance in this field, developing a privacy-preserving fall detection system that processes visual information directly at the sensor. The team integrates the Sony IMX636 event-based vision sensor with the Intel Loihi 2 neuromorphic processor, creating a system that exploits the sparsity of event data with minimal energy use. This approach achieves a peak fall detection F1 score of 84% while dramatically reducing computational demands, requiring up to 55× fewer synaptic operations, and maintains user privacy by avoiding data transmission to the cloud. The research highlights the potential of combining advanced sensing and processing technologies for real-world edge AI applications where speed, efficiency, and data security are paramount.

Detecting falls in elderly individuals using vision-based systems remains a significant challenge, particularly given the need for privacy and real-time performance with limited computing resources. Researchers are now addressing this problem with a new neuromorphic system that combines an event-based vision sensor with the Intel Loihi 2 processor, effectively utilizing the sparse nature of event data.

Event-Based Vision and Spiking Networks

Neuromorphic computing aims to build computer systems inspired by the brain, employing both novel hardware and algorithms. A key component is the Spiking Neural Network (SNN), which mimics biological neurons by using discrete events, or spikes, for communication, offering potential for energy efficiency. Complementing this is the event camera, a vision sensor that detects changes in brightness asynchronously, unlike traditional cameras that capture frames at fixed intervals. Event cameras provide advantages in speed, dynamic range, and power consumption. Researchers are also exploring State Space Models (SSMs), mathematical frameworks for modeling dynamic systems, and integrating them with SNNs to create more powerful and efficient models.
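The Leaky Integrate-and-Fire dynamics mentioned above can be sketched in a few lines: the membrane potential leaks toward zero, integrates weighted input spikes, and emits a spike when it crosses a threshold. This is a minimal illustrative model, not the paper's implementation; the function name and all parameter values here are assumptions.

```python
import numpy as np

def lif_step(v, spikes_in, weights, tau=20.0, v_th=1.0):
    """One discrete-time step of a Leaky Integrate-and-Fire neuron."""
    # Leak the membrane potential toward zero, then integrate
    # the weighted input spikes for this time step.
    v = v * (1.0 - 1.0 / tau) + weights @ spikes_in
    spike = 1.0 if v >= v_th else 0.0
    if spike:
        v = 0.0  # reset after firing
    return v, spike

# Drive the neuron with a short random spike train
rng = np.random.default_rng(0)
w = rng.uniform(0.1, 0.5, size=8)
v, out = 0.0, []
for _ in range(50):
    v, s = lif_step(v, rng.integers(0, 2, size=8), w)
    out.append(s)
print(f"{int(sum(out))} spikes emitted over 50 steps")
```

Because the neuron only communicates when it crosses threshold, downstream synaptic operations scale with spike activity rather than with input size, which is the efficiency lever the rest of the article exploits.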

Event-Based Fall Detection with Neuromorphic Computing

This work introduces a novel fall detection system for elderly care, integrating an event-based vision sensor with the Intel Loihi 2 processor to achieve robust, real-time performance under strict hardware constraints. The team created a new dataset comprising 14 action classes, including falling and non-falling scenarios, recorded under diverse conditions with varying backgrounds, lighting, and camera distances, totaling approximately 3906 training samples, 3182 validation samples, and 1793 test samples. Experiments focused on binary classification, distinguishing between falls and non-fall events, using sequences lasting 4.6 seconds, with only the most event-rich 2 seconds utilized for training and validation.
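Selecting the most event-rich 2-second window can be done with a simple sliding count over event timestamps. The paper does not spell out its exact selection criterion, so the following is an assumed implementation; `densest_window` and all parameter values are illustrative.

```python
import numpy as np

def densest_window(timestamps, seq_len=4.6, win_len=2.0, step=0.1):
    """Return the start time of the win_len-second window containing
    the most events, scanned in step-second increments."""
    starts = np.arange(0.0, seq_len - win_len + 1e-9, step)
    counts = [np.sum((timestamps >= t0) & (timestamps < t0 + win_len))
              for t0 in starts]
    return starts[int(np.argmax(counts))]

# Synthetic event stream: background noise plus a burst of
# activity around t = 3 s (standing in for the fall itself)
rng = np.random.default_rng(1)
ts = np.concatenate([rng.uniform(0, 4.6, 500),
                     rng.normal(3.0, 0.2, 2000)])
print(densest_window(ts))  # window start lands just before the burst
```

Cropping to the densest window concentrates training on the motion of interest and discards long stretches where the event sensor produces almost no data.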

Initial experiments with a CNN+MLP model achieved an F1 score of 46.3% at a computational cost of 433 million synaptic operations per second (M SynOps/s). Replacing ReLU activations with a SigmaDelta model slightly decreased the F1 score but reduced computational cost. Transitioning to a binary Leaky Integrate-and-Fire (LIF) model increased the F1 score to 53.3% and further reduced SynOps/s.

The most efficient model, utilizing a graded LIF model, achieved an F1 score of 58.1% with a remarkable 55.5× reduction in SynOps/s, reaching only 26 M SynOps/s.
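The intuition behind the graded-LIF savings is that a single graded spike carries a magnitude that binary spiking must spread across many time steps. The toy comparison below illustrates this; both encoding schemes here are hypothetical simplifications, not the paper's neuron models.

```python
import numpy as np

def binary_encode(x, v_th=1.0, steps=8):
    """Encode magnitudes as repeated binary spikes (rate-like coding):
    larger values need more time steps, hence more events."""
    v = np.zeros_like(x)
    spikes = []
    for _ in range(steps):
        v += x                        # integrate the input each step
        s = (v >= v_th).astype(float)  # at most one spike per step
        v -= s * v_th                  # soft reset
        spikes.append(s)
    return np.stack(spikes)

def graded_encode(x, v_th=1.0):
    """A graded spike carries its magnitude in a single event
    (values below a small threshold are suppressed)."""
    return np.where(np.abs(x) >= v_th * 0.1, x, 0.0)

x = np.array([0.0, 0.4, 1.2, 3.0])
print("binary events:", int(binary_encode(x).sum()))
print("graded events:", int((graded_encode(x) != 0).sum()))
```

Fewer events means fewer synaptic operations per inference, which is consistent with the nearly five-fold reduction the authors report for graded versus binary spikes.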

Further exploration involved integrating the S4D state space model, which decouples spatial and temporal feature extraction, significantly improving the F1 score to 65.1% at a cost of 198 M SynOps/s. Replacing the CNN with the MCU13B architecture further enhanced the F1 score to 71.8%, though at an increased computational cost of 1059 M SynOps/s. Utilizing patched inference allowed the model to fit onto a single Loihi 2 chip, with a slight degradation in F1 score. The most accurate model, MCU13B+S4D, and the most compute-efficient, graded LIF-based CNN+MLP, define the performance limits of this integrated sensing and processing approach for edge AI applications. Hardware evaluation on Loihi 2 confirmed a power consumption of 90 mW for the highest performing model.
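The temporal core of S4D is a diagonal linear state-space recurrence, which is what makes it cheap to evaluate step by step between spatial layers. A toy sketch of that recurrence follows; the parameter values are illustrative, not those of the trained model.

```python
import numpy as np

def s4d_scan(u, A, B, C):
    """Diagonal state-space (S4D-style) recurrence over a 1-D input
    sequence u: x_k = A*x_{k-1} + B*u_k, y_k = Re(C . x_k)."""
    x = np.zeros_like(A)
    ys = []
    for u_k in u:
        x = A * x + B * u_k       # element-wise: the state matrix is diagonal
        ys.append(np.real(C @ x))  # read out a real-valued output
    return np.array(ys)

# Toy diagonal system with stable, decaying complex modes
n = 4
A = np.exp(-0.1 + 1j * np.linspace(0.5, 2.0, n))  # |A| < 1: stable
B = np.ones(n, dtype=complex)
C = np.ones(n, dtype=complex) / n
y = s4d_scan(np.sin(np.linspace(0, 6, 50)), A, B, C)
print(y.shape)  # (50,)
```

Because the state update is element-wise, each time step costs only O(n) operations per channel, and the spatial feature extractor (the CNN or MCU13B) can be applied independently of this temporal scan.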

Neuromorphic Fall Detection With Graded Spikes

This research presents a fully embedded system integrating an event-based vision sensor with a neuromorphic processor, demonstrating a pathway towards low-latency, low-power fall detection for elderly care. By directly interfacing the sensor and processor, and representing visual information as graded spikes, the team successfully leveraged the inherent spatio-temporal sparsity of event data. Extensive exploration of neural network architectures and neuron models revealed that graded spikes within Leaky Integrate-and-Fire models significantly improve detection accuracy, achieving a 6.2% gain in F1 score while reducing computational demands by nearly five times compared to binary spikes.

The team further demonstrated the deployment of a state-of-the-art spatial feature extractor on neuromorphic hardware, achieving a peak F1 score of 83.6% with a power consumption of only 90 mW. While static power currently represents the primary limitation of the system, the researchers acknowledge that future hardware iterations utilising more advanced manufacturing processes could address this. This work establishes a proof-of-concept for embedding dedicated processing directly at the edge of event-based vision systems, enabling always-on perception that filters raw sensory data and reduces the need for extensive cloud-based processing. Further research will focus on refining both hardware and algorithms to explore the potential of graded spikes and expand the design space of sparse neural networks beyond binary representations.

👉 More information
🗞 Privacy-preserving fall detection at the edge using Sony IMX636 event-based vision sensor and Intel Loihi 2 neuromorphic processor
🧠 ArXiv: https://arxiv.org/abs/2511.22554

Rohail T.

I am a quantum scientist exploring the frontiers of physics and technology. My work focuses on uncovering how quantum mechanics, computing, and emerging technologies are transforming our understanding of reality. I share research-driven insights that make complex ideas in quantum science clear, engaging, and relevant to the modern world.
