Event Cameras Enable Microsaccade Recognition with Dataset Simulating 0.5 to 2.0 Degree Angular Displacements

Understanding the tiny, involuntary movements of the human eye, known as microsaccades, is crucial for deciphering how we perceive the world and how our brains process visual information. Waseem Shariff, Timothy Hanley, Maciej Stec, and Peter Corcoran, all from the University of Galway, together with Hossein Javidnia from Trinity College Dublin, have created a novel dataset and evaluation framework to advance research in this area. Their work addresses limitations in traditional microsaccade studies by utilising event-based cameras, which capture motion with exceptional speed and efficiency. The team demonstrates that spiking neural networks, specifically a refined variant called Spiking-VGG16Flow, can classify these subtle eye movements by their angular displacement with around 90 percent accuracy. The work also establishes a new benchmark for event-based vision research, promising deeper insights into visual perception and cognitive processes.

Event Cameras Capture Subtle Eye Movements

This research investigates the potential of event cameras, bio-inspired vision sensors that detect changes in brightness, for capturing high-resolution eye movements, including subtle microsaccades. Event cameras offer high temporal resolution, low latency, and reduced data volume, properties that are crucial for real-time applications and low-power devices. The technology holds promise for driver monitoring, human-computer interaction, neurological research, and virtual and augmented reality, where it could enable more accurate analysis of subtle eye motions than frame-based methods allow. The focus on microsaccades is particularly interesting: these movements are often overlooked, yet they may provide valuable insights into visual perception and cognitive processes.
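
To make the sensing model concrete: rather than producing full frames, an event camera emits an asynchronous stream of per-pixel events, each recording where and when brightness changed and in which direction. The sketch below shows a minimal, hypothetical representation of such a stream in Python; the field names are illustrative and are not taken from the paper or any specific camera SDK.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """One event-camera output: a brightness change at a single pixel."""
    x: int          # pixel column
    y: int          # pixel row
    t_us: int       # timestamp in microseconds
    polarity: int   # +1 for a brightness increase, -1 for a decrease

# An event stream is a time-ordered sequence of such records; a microsaccade
# of 0.5 to 2.0 degrees appears as a brief, dense burst of events along the
# direction of motion instead of a series of full frames.
stream = [
    Event(x=120, y=64, t_us=1_050, polarity=+1),
    Event(x=121, y=64, t_us=1_062, polarity=-1),
]
```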

Simulating Microsaccades with Event-Based Rendering

Scientists developed a novel event-based dataset to investigate the dynamics of microsaccades, the small, involuntary eye movements crucial for visual perception. Recognizing the limitations of traditional eye-tracking methods, the team pioneered a method using Blender to render high-fidelity eye movement scenarios, accurately modeling microsaccades with angular displacements ranging from 0.5 to 2.0 degrees. These rendered sequences were converted into event streams, preserving the natural temporal characteristics of these rapid movements.
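
This summary does not name the exact frame-to-event converter used, but simulators of this kind generally follow the same principle: an event is emitted at a pixel whenever its log intensity drifts past a contrast threshold relative to the last event at that pixel. The following is a simplified sketch of that idea, assuming a list of rendered grayscale frames with timestamps; real converters additionally interpolate timestamps between frames and can emit several events per pixel per frame.

```python
import numpy as np

def frames_to_events(frames, timestamps, threshold=0.2, eps=1e-6):
    """Convert rendered grayscale frames into an event stream.

    An event (x, y, t, polarity) is emitted whenever the log intensity at a
    pixel has changed by more than `threshold` since the last event at that
    pixel, mimicking how an event camera responds to brightness changes.
    """
    ref = np.log(frames[0].astype(np.float64) + eps)  # per-pixel reference level
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_i = np.log(frame.astype(np.float64) + eps)
        diff = log_i - ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for x, y in zip(xs, ys):
            polarity = 1 if diff[y, x] > 0 else -1
            events.append((int(x), int(y), float(t), polarity))
            ref[y, x] = log_i[y, x]  # reset the reference at this pixel
    return events
```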

This innovative approach enabled the researchers to create a dataset with exceptionally high temporal resolution, prompting an investigation into the capabilities of Spiking Neural Networks (SNNs) for detecting and classifying these subtle motions. Experiments demonstrated that SNNs achieve approximately 90 percent average accuracy in classifying microsaccades by angular displacement, independent of event count or duration, highlighting the potential of SNNs for fine motion recognition. The dataset, alongside the code and trained models, is publicly available to facilitate further investigation into neuromorphic computing and visual neuroscience.
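
The exact architecture of Spiking-VGG16Flow is beyond the scope of this summary, but the core data flow of a spiking classifier can be sketched compactly: inputs arrive as a sequence of time bins, leaky integrate-and-fire (LIF) neurons accumulate input and spike across that sequence, and class scores are read out from the spike rates. The PyTorch sketch below is a deliberately tiny stand-in, not the authors' model, and it omits the surrogate-gradient machinery needed to train through the non-differentiable spike threshold.

```python
import torch
import torch.nn as nn

class LIFLayer(nn.Module):
    """Leaky integrate-and-fire neurons: the membrane potential decays each
    step, integrates the input current, and emits a binary spike on crossing
    a threshold (with a soft reset afterwards)."""
    def __init__(self, beta=0.9, threshold=1.0):
        super().__init__()
        self.beta, self.threshold = beta, threshold

    def forward(self, current_seq):
        # current_seq: [time, batch, features]
        mem = torch.zeros_like(current_seq[0])
        spikes = []
        for current in current_seq:
            mem = self.beta * mem + current
            spk = (mem >= self.threshold).float()
            mem = mem - spk * self.threshold
            spikes.append(spk)
        return torch.stack(spikes)

class TinySpikingClassifier(nn.Module):
    """A minimal spiking classifier over time-binned event features; the
    paper's Spiking-VGG16Flow is far deeper, this only shows the data flow."""
    def __init__(self, in_features, num_classes=7):
        super().__init__()
        self.fc1 = nn.Linear(in_features, 128)
        self.lif = LIFLayer()
        self.fc2 = nn.Linear(128, num_classes)

    def forward(self, x):
        # x: [time, batch, in_features], e.g. per-bin event counts
        currents = torch.stack([self.fc1(step) for step in x])
        spikes = self.lif(currents)
        scores = torch.stack([self.fc2(step) for step in spikes])
        return scores.mean(dim=0)  # rate-coded class scores
```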

Synthetic Microsaccade Dataset for Event-Based Vision

Scientists have created a pioneering dataset of synthetic microsaccades to advance research in event-based vision and neuromorphic computing. The work addresses limitations in current microsaccade analysis, which often relies on costly, high-resolution eye trackers and frame-based methods lacking temporal precision. This new dataset comprises 175,000 annotated event stream samples, simulating both left and right eye microsaccades across seven distinct angular amplitudes ranging from 0.5 to 2.0 degrees.
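
The summary gives the amplitude range and the number of classes but not their spacing; assuming seven evenly spaced amplitudes (a 0.25-degree step is the simplest reading), the label space can be enumerated as below. The helper name is hypothetical and is intended only to show how samples might map to classification targets.

```python
import numpy as np

# Seven angular amplitudes spanning 0.5 to 2.0 degrees. Even 0.25-degree
# spacing is an assumption for illustration; the summary states only the
# range and the number of classes.
AMPLITUDES_DEG = np.linspace(0.5, 2.0, num=7)   # 0.50, 0.75, ..., 2.00
EYES = ("left", "right")

def amplitude_to_class(amplitude_deg: float) -> int:
    """Map a microsaccade amplitude to the index of the nearest class."""
    return int(np.argmin(np.abs(AMPLITUDES_DEG - amplitude_deg)))

assert amplitude_to_class(1.25) == 3   # three 0.25-degree steps above 0.5
```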

Experiments demonstrate that Spiking Neural Networks (SNNs) can effectively classify these subtle eye movements based on angular displacement, achieving approximately 90 percent average accuracy. Crucially, the models accurately classify microsaccades irrespective of event count or duration, highlighting their robustness and potential for real-time applications. The dataset and trained models will be publicly available, establishing a benchmark for future research in event-based vision and providing a valuable resource for the neuromorphic computing community.
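
One plausible way to probe that robustness is to encode every sample into a tensor whose shape does not depend on how long the movement lasted or how many events it produced, for example by accumulating events into a fixed number of time bins scaled to each sample's own duration. The sketch below shows such a generic event-to-tensor encoding; it is not the preprocessing described in the paper, only an illustration of why duration- and count-invariant classification is achievable.

```python
import numpy as np

def events_to_binned_frames(events, sensor_hw, num_bins=10):
    """Accumulate an event stream into a fixed number of time bins.

    The bin edges are scaled to each sample's own duration, so two
    microsaccades of the same amplitude but different speed or event count
    yield tensors of identical shape: one simple way to check that a model
    keys on spatiotemporal structure rather than raw event volume.
    """
    h, w = sensor_hw
    frames = np.zeros((num_bins, h, w), dtype=np.float32)
    ts = np.array([t for (_, _, t, _) in events], dtype=np.float64)
    t0, span = ts.min(), max(ts.max() - ts.min(), 1e-9)
    for x, y, t, p in events:
        b = min(int((t - t0) / span * num_bins), num_bins - 1)
        frames[b, y, x] += p
    return frames
```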

Realistic Microsaccades Dataset for Spiking Networks

This research presents a novel, synthetic dataset designed to advance the study of microsaccades, the small, involuntary movements of the eye crucial for visual processing. Scientists developed realistic eye movement simulations using advanced rendering techniques, then converted these into event streams. The team successfully trained several spiking neural network models, demonstrating their ability to accurately classify microsaccades by angular displacement. Results indicate that these networks learn from the spatiotemporal patterns of eye movements, rather than simply relying on the quantity of visual information.

To assess real-world applicability, the trained model was tested on existing event-camera data, successfully detecting microsaccade-like activity during periods of visual fixation. Future work will focus on creating a dedicated event-based dataset collected under more demanding conditions, alongside exploring different data processing strategies to further refine performance. This research establishes a valuable benchmark for event-based vision research and paves the way for more sophisticated understanding of eye movements and visual perception.

👉 More information
🗞 Benchmarking Microsaccade Recognition with Event Cameras: A Novel Dataset and Evaluation
🧠 ArXiv: https://arxiv.org/abs/2510.24231

Rohail T.

I am a quantum scientist exploring the frontiers of physics and technology. My work focuses on uncovering how quantum mechanics, computing, and emerging technologies are transforming our understanding of reality. I share research-driven insights that make complex ideas in quantum science clear, engaging, and relevant to the modern world.
