Vision-guided Optic Flow Navigation Achieves Robust Lunar Descent with 1% Error Using Sparse Flow Features

Navigating a descent to the lunar surface presents a significant challenge for increasingly common private missions, which demand robust autonomy within tight constraints on power and mass. Sean Cowan, Pietro Fanti, and Leon B. S. Williams, working at the Advanced Concepts Team within the European Space Research and Technology Centre (ESTEC), alongside Chit Hong Yam, Kaneyasu Asakuma, and Yuichiro Nada from ispace, inc., address this problem with a novel approach to onboard navigation. Their research introduces a motion-field inversion framework that estimates a lander’s movement from visual flow and rangefinder data, offering a lightweight, CPU-based solution suitable for small lunar missions. By integrating classical flow formulations with depth models tailored to lunar terrain, the team achieves accurate velocity estimation, demonstrating sub-10% error even over challenging landscapes and paving the way for robust, real-time navigation on future lunar expeditions.

Lunar Vision Navigation and Hazard Avoidance

This research details advancements in vision-based navigation and hazard detection for lunar landing missions. Scientists developed a system that uses cameras and image processing to autonomously navigate to a landing site on the Moon, identify obstacles, and safely land a spacecraft. The work combines theoretical development, simulation, and flight-relevant considerations, spanning vision-based navigation, hazard detection and avoidance, image processing, and the estimation of a spacecraft’s motion from visual data. Researchers explored spherical depth maps, which represent the 3D structure of the lunar surface from camera images, and trajectory optimization, which finds the best path for a spacecraft given fuel consumption and safety constraints.

Understanding how light interacts with the lunar surface, through bidirectional reflectance spectroscopy, proved crucial for improving image processing, and complementary sensors such as Navigation Doppler Lidar were considered to enhance accuracy and robustness. This work demonstrates robust egomotion estimation and accurate hazard detection even in challenging lunar environments. The team, comprising aerospace engineers, AI specialists, chemical physicists, and aeronautics experts, developed a comprehensive system for lunar landing missions, leveraging expertise in astrodynamics, artificial intelligence, machine learning, and spacecraft control.

Optical Flow and Laser Rangefinder Fusion

This study addresses the challenge of autonomous navigation for small lunar missions, focusing on a lightweight, CPU-based solution for estimating a spacecraft’s movement during descent. Scientists engineered a motion-field inversion framework that combines optical flow, derived from a monocular camera, with depth models informed by laser rangefinder data, avoiding the need for heavy, power-intensive LiDAR systems. The core of the method involves extracting sparse flow features with the pyramidal Lucas-Kanade algorithm, which tracks image features between frames to determine their apparent motion across the image plane. To translate the observed image motion into estimates of the spacecraft’s translational and angular velocity, the team developed two depth models, a planar model and a spherical model, both parameterized by laser rangefinder data.
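
To make the flow-extraction step concrete, the sketch below runs a pyramidal Lucas-Kanade tracker on a pair of descent-camera frames using OpenCV. The file names, feature count, and window size are placeholder assumptions for illustration, not values from the paper.

```python
import cv2

# Two consecutive descent-camera frames (hypothetical file names).
prev_img = cv2.imread("frame_0000.png", cv2.IMREAD_GRAYSCALE)
next_img = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)

# Detect sparse, well-textured corners to track (Shi-Tomasi detector).
prev_pts = cv2.goodFeaturesToTrack(
    prev_img, maxCorners=200, qualityLevel=0.01, minDistance=10)

# Pyramidal Lucas-Kanade: track each corner into the next frame.
next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
    prev_img, next_img, prev_pts, None, winSize=(21, 21), maxLevel=3)

# Keep only successfully tracked features; each (u, v) difference is
# the apparent image-plane motion of one feature between the frames.
ok = status.ravel() == 1
pts = prev_pts.reshape(-1, 2)[ok]
flow = next_pts.reshape(-1, 2)[ok] - pts
```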

The framework employs a least-squares optimization process to combine the optical flow data with the depth map, effectively inverting the motion field to calculate the spacecraft’s egomotion. Experiments utilized synthetically generated lunar images, specifically focusing on the challenging terrain of the lunar south pole, to validate the approach. Results demonstrate accurate velocity estimation throughout the approach and landing phases, achieving sub-10% error for complex terrain and approximately 1% error for more typical surfaces, paving the way for more accessible and cost-effective lunar exploration.
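
As a concrete illustration of the inversion, the sketch below solves the classical motion-field equations (the Longuet-Higgins and Prazdny formulation) in least squares with NumPy. The sign conventions and the joint solve for translational and angular velocity are assumptions made for this sketch, not necessarily the authors' exact solver.

```python
import numpy as np

def invert_motion_field(pts, flow, depth):
    """Least-squares egomotion from sparse optical flow (a sketch).

    pts   : (N, 2) feature locations in normalized image coordinates
    flow  : (N, 2) measured flow (u, v) at those locations
    depth : (N,)   depth of each feature from the depth model

    Returns (v, omega): translational and angular velocity estimates
    in the camera frame.
    """
    x, y = pts[:, 0], pts[:, 1]
    iz = 1.0 / depth
    z = np.zeros_like(x)

    # Classical motion-field model (normalized pinhole camera):
    #   u = (x*vz - vx)/Z + x*y*wx - (1 + x^2)*wy + y*wz
    #   v = (y*vz - vy)/Z + (1 + y^2)*wx - x*y*wy - x*wz
    A_u = np.stack([-iz, z, x * iz, x * y, -(1 + x**2), y], axis=1)
    A_v = np.stack([z, -iz, y * iz, 1 + y**2, -x * y, -x], axis=1)

    A = np.vstack([A_u, A_v])                     # (2N, 6) design matrix
    b = np.concatenate([flow[:, 0], flow[:, 1]])  # (2N,) stacked flow

    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3], sol[3:]
```

Because depth enters only as a known coefficient, the system is linear in the six motion parameters and a single least-squares solve suffices; in flight, angular rates could instead be taken from an inertial unit, reducing the solve to the three translational components.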

Precise Lunar Descent Velocity Estimation Achieved

Scientists achieved accurate velocity estimation during simulated lunar descent, demonstrating sub-10% error for complex terrain and approximately 1% error for more typical lunar surfaces. This result delivers a lightweight, CPU-based solution for autonomous navigation, addressing a critical challenge for small private lunar landers that lack the resources for complex sensing systems. The team’s motion-field inversion framework combines sparse Lucas-Kanade optical flow from a monocular camera with depth models parameterized by laser rangefinder data, linking the apparent motion of image features directly to the vehicle’s movement. Experiments showed that the framework accurately determines the spacecraft’s velocity during both orbital and terminal descent phases. This achievement promises robust, lightweight onboard navigation for future small lunar missions, despite stringent constraints on mass, power, and computational resources.
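
As one concrete way such a depth model can be parameterized, the sketch below computes per-feature depth for a locally planar surface from a single boresight rangefinder measurement. The flat-terrain assumption, the boresight-aligned rangefinder, and the camera-frame normal are illustrative choices; the paper’s spherical model would replace the plane with a sphere of lunar radius.

```python
import numpy as np

def planar_depth(pts, boresight_range, normal=(0.0, 0.0, 1.0)):
    """Depth of a planar surface at normalized image points (x, y).

    boresight_range : range to the surface along the camera boresight,
                      e.g. a laser rangefinder reading (assumption:
                      the rangefinder is aligned with the camera axis).
    normal          : unit surface normal in the camera frame, e.g.
                      derived from the lander's attitude estimate.
    """
    n = np.asarray(normal, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # A scene point is P = Z * (x, y, 1). The boresight hit (0, 0, r)
    # lies on the plane n . P = d, so d = n_z * r and
    # Z(x, y) = n_z * r / (n_x * x + n_y * y + n_z).
    return n[2] * boresight_range / (n[0] * x + n[1] * y + n[2])
```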

Vision-Based Lunar Descent Velocity Estimation Achieved

This research demonstrates a vision-based method for estimating the movement of a spacecraft during lunar descent, offering a computationally efficient alternative to traditional approaches. By combining sparse optical flow analysis with simplified depth modeling, using both planar and spherical terrain approximations informed by rangefinder data, the team achieved accurate velocity estimates across a range of altitudes and lunar terrains. Results indicate sub-10% error for complex terrain and approximately 1% error for more typical lunar landscapes, all within the limited processing power available on small lunar landers. The study establishes the practicality of a lightweight, vision-based navigation system that relies on onboard computation and potentially existing camera hardware, a significant advantage over heavier, more power-intensive LiDAR systems. Future work could explore filtering mechanisms to reduce optical flow errors encountered in actual flight, as well as sensor fusion techniques to further refine attitude and angular velocity estimates, enhancing overall stability and robustness. This framework represents a promising step toward autonomous, reliable, and efficient navigation for future governmental and commercial lunar missions.
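
As a generic illustration of the kind of filtering such future work might employ, the sketch below gates flow vectors with a median-absolute-deviation test before inversion; this particular scheme is an assumption for illustration, not a technique from the paper.

```python
import numpy as np

def reject_flow_outliers(flow, k=3.0):
    """Drop flow vectors far from the median flow (MAD gate).

    flow : (N, 2) flow vectors; returns a boolean inlier mask.
    k    : gate width in robust standard deviations (assumed value).
    """
    med = np.median(flow, axis=0)
    resid = np.linalg.norm(flow - med, axis=1)
    mad = np.median(resid) + 1e-12       # avoid division by zero
    return resid < k * 1.4826 * mad      # 1.4826 scales MAD to sigma
```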

👉 More information
🗞 Vision-Guided Optic Flow Navigation for Small Lunar Missions
🧠 ArXiv: https://arxiv.org/abs/2511.17720

Rohail T.

I am a quantum scientist exploring the frontiers of physics and technology. My work focuses on uncovering how quantum mechanics, computing, and emerging technologies are transforming our understanding of reality. I share research-driven insights that make complex ideas in quantum science clear, engaging, and relevant to the modern world.
