I2E Enables 300x Faster Image-to-Event Conversion, Achieving 60.50% Accuracy for High-Performance Spiking Neural Networks

The limited availability of event-stream data currently restricts the potential of energy-efficient spiking neural networks. To address this challenge, Ruichen Ma, Liwei Meng, Guanchao Qiao, and colleagues developed I2E, a novel algorithmic framework that converts standard images into realistic event streams. The approach removes a significant bottleneck by achieving conversion speeds over 300 times faster than previous methods, enabling effective data augmentation during SNN training. Demonstrating the framework’s power, the team achieves a state-of-the-art accuracy of 60.50% on the I2E-ImageNet dataset and, crucially, establishes a robust sim-to-real pipeline in which pre-training on synthetic data yields an unprecedented 92.5% accuracy on real-world sensor data, paving the way for practical, high-performance spiking neural network systems.

Neuromorphic systems promise highly energy-efficient computing but currently face a critical shortage of event-stream data, limiting their wider adoption. This work introduces I2E, an algorithmic framework designed to resolve this bottleneck by converting static images into high-fidelity event streams. By simulating microsaccadic eye movements with a highly parallelized convolution, I2E achieves a conversion speed over 300 times faster than prior methods, uniquely enabling on-the-fly data augmentation for spiking neural network (SNN) training. The framework’s effectiveness is demonstrated on large-scale benchmarks: an SNN trained on the generated I2E-ImageNet dataset achieves a state-of-the-art accuracy of 60.50%. This work establishes a new approach to generating the data required for training advanced neuromorphic systems.
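The microsaccade idea above can be illustrated with a minimal sketch: jitter the image by small random offsets and emit ON/OFF events wherever the log-intensity change between consecutive views crosses a contrast threshold. This is an assumption-laden toy, not the paper's implementation; the function name, shift magnitude, fixed threshold, and wrap-around shifting are all illustrative choices, and I2E's actual parallelized-convolution formulation will differ.

```python
import numpy as np

def image_to_events(img, num_steps=8, threshold=0.2, max_shift=2, seed=0):
    """Toy image-to-event conversion via simulated microsaccades.

    `img` is a 2-D grayscale array in [0, 1]. Each timestep shifts the
    image by a small random offset (np.roll wraps at the borders, a
    simplification) and emits an event wherever the log-intensity
    difference from the previous view exceeds `threshold`.
    Returns a list of (t, y, x, polarity) tuples with polarity +1/-1.
    """
    rng = np.random.default_rng(seed)
    log_img = np.log(img.astype(np.float64) + 1e-3)  # log contrast, like a DVS pixel
    prev = log_img
    events = []
    for t in range(num_steps):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        shifted = np.roll(log_img, shift=(dy, dx), axis=(0, 1))
        diff = shifted - prev
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            events.append((t, int(y), int(x), 1 if diff[y, x] > 0 else -1))
        prev = shifted
    return events
```

A uniform image produces no events (shifting a constant image changes nothing), while any image with edges fires events along those edges, which is the intuition behind using small eye-movement-like shifts to turn static contrast into temporal contrast.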

Image Stream Generation Boosts Spiking Neural Networks

Scientists have developed a new algorithmic framework, I2E, that addresses a critical limitation in spiking neural network (SNN) research: the scarcity of event-stream data. By efficiently converting static images into high-fidelity event streams, I2E overcomes a significant bottleneck hindering the development of these energy-efficient computing systems. The method achieves a conversion speed exceeding previous approaches by a factor of 300, enabling the practical application of on-the-fly data augmentation for SNN training. Researchers trained a deep spiking neural network on a newly generated dataset, I2E-ImageNet, achieving a state-of-the-art accuracy of 60.50%, surpassing prior event-based ImageNet results by over 8%.

Experiments reveal that models trained with I2E data benefit significantly from standard data augmentation techniques. A crucial finding is the establishment of a powerful “sim-to-real” paradigm, where pre-training on synthetic I2E data followed by fine-tuning on real-world CIFAR10-DVS data yields an unprecedented accuracy of 92.5%, a remarkable 7.7% improvement over previous best results.

This demonstrates that I2E-generated event data accurately mimics real sensor data, bridging a long-standing gap in neuromorphic engineering. Further analysis confirms the importance of I2E’s core components, with dynamic thresholding, random selection, and standard augmentations progressively improving performance. Experiments show that even with a reduced number of timesteps, the method maintains competitive accuracy while significantly increasing data compression. These results establish I2E as a foundational toolkit for developing high-performance SNNs and mitigating the data acquisition bottleneck that has long hindered progress in neuromorphic computing.
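The dynamic-thresholding component is described here only at a high level. One plausible reading, offered purely as an assumption, is a per-frame threshold chosen so that a roughly constant fraction of pixels fires regardless of image contrast. A minimal sketch under that assumption (the function name and the quantile rule are hypothetical, not taken from the paper):

```python
import numpy as np

def dynamic_threshold(diff, target_rate=0.05):
    """Pick a per-frame contrast threshold so that roughly `target_rate`
    of pixels exceed it. `diff` is the log-intensity difference map for
    one timestep; the returned scalar can then be used as the firing
    threshold for that frame. A hypothetical stand-in for I2E's
    dynamic-thresholding component.
    """
    flat = np.abs(diff).ravel()
    # The (1 - target_rate) quantile leaves ~target_rate of pixels above it.
    return float(np.quantile(flat, 1.0 - target_rate))
```

Compared with a fixed threshold, this keeps the event rate stable across low- and high-contrast images, which is one way such a mechanism could interact with the timestep-reduction and compression results reported above.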

👉 More information
🗞 I2E: Real-Time Image-to-Event Conversion for High-Performance Spiking Neural Networks
🧠 ArXiv: https://arxiv.org/abs/2511.08065
Rohail T.

I am a quantum scientist exploring the frontiers of physics and technology. My work focuses on uncovering how quantum mechanics, computing, and emerging technologies are transforming our understanding of reality. I share research-driven insights that make complex ideas in quantum science clear, engaging, and relevant to the modern world.
