EgoWalk Dataset Enables Robust Real-World Robot Navigation Research and Development

Researchers developed EgoWalk, a 50-hour dataset documenting human navigation across diverse indoor and outdoor environments and seasons. It is accompanied by automated pipelines that generate subsidiary datasets for goal annotation and traversability segmentation, and by full hardware specifications to facilitate further robotics research.

The development of robust autonomous navigation systems requires extensive training data reflecting the complexities of real-world environments, yet current datasets often lack the scale and diversity needed for the unpredictable conditions robots encounter outside controlled laboratory settings. To address this gap, a multi-institution team (Timur Akhtyamov, Mohamad Al Mdfaa, Javier Antonio Ramirez, Sergey Bakulin, German Devchich, Denis Fatykhov, Alexander Mazurov, Kristina Zipa, Malik Mohrat, Pavel Kolesnik, Ivan Sosin and Gonzalo Ferrer) presents EgoWalk: A Multimodal Dataset for Robot Navigation in the Wild. The resource comprises 50 hours of human-recorded navigation data captured across a range of indoor and outdoor locations, seasons and environmental conditions, alongside automated pipelines for generating supplementary datasets useful for a variety of robotic tasks.

EgoWalk addresses a critical need for large-scale, real-world data with which to train and validate navigation algorithms. Because the recordings capture how people actually move through varied indoor and outdoor spaces across seasons and locations, they provide grounding for more intelligent and adaptable robotic systems. Beyond raw data acquisition, the team developed automated pipelines to generate supplementary datasets for goal annotation and traversability segmentation, broadening EgoWalk's applicability beyond core navigation tasks.

These pipelines process the collected data to produce goal annotations, which pinpoint desired destinations, and traversability segmentation masks, which identify the areas a robot can pass through. The supplementary datasets extend EgoWalk's value, giving researchers the tools to train and evaluate algorithms for navigating complex environments with greater autonomy and reliability.
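The paper's exact labeling procedure is not reproduced here, but a common recipe for deriving traversability labels from egocentric walking data is to project the walker's own future path into the camera image and mark the swept footprint as traversable. The sketch below illustrates that general idea under a simple pinhole-camera assumption; the function, its parameters, and the footprint heuristic are all illustrative, not EgoWalk's actual pipeline.

```python
import numpy as np
import cv2

def traversability_mask(traj_cam, K, img_hw, footprint_px=25):
    """Project a walked trajectory into the image as a traversability mask.

    traj_cam: (N, 3) future path points in the camera frame (x right,
              y down, z forward), e.g. recovered from odometry.
    K:        (3, 3) pinhole intrinsics.
    All values and the footprint heuristic are illustrative; the EgoWalk
    pipeline's actual labeling method may differ.
    """
    h, w = img_hw
    mask = np.zeros((h, w), dtype=np.uint8)
    for p in traj_cam:
        if p[2] <= 0.1:  # skip points behind or too close to the camera
            continue
        u, v, z = K @ p  # pinhole projection, then perspective divide
        u, v = int(u / z), int(v / z)
        if 0 <= u < w and 0 <= v < h:
            # Mark a disk around each projected point as traversable;
            # shrinking the radius with depth approximates a metric footprint.
            cv2.circle(mask, (u, v), max(2, int(footprint_px / p[2])), 255, -1)
    return mask
```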

Experiments evaluated the dataset's utility in three areas: visual navigation, traversability segmentation, and language-guided goal annotation.

For visual navigation, models were trained on an Nvidia GeForce RTX 3090 GPU for 70 epochs with a batch size of 50, using the AdamW optimiser with a learning rate of 1e-4. Performance was benchmarked against the existing methods ViNT and NoMaD, with comparative predictions presented in Figure 19 of the full publication.
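As a concrete illustration, the reported recipe maps onto a standard PyTorch training loop. In the sketch below, only the optimiser choice, learning rate, batch size, and epoch count come from the paper; the model, data, and loss are synthetic stand-ins, not the authors' code or the ViNT/NoMaD architectures.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-ins so the sketch runs end to end; the real pipeline
# consumes EgoWalk image sequences and trains far richer policy networks.
obs = torch.randn(500, 512)       # placeholder observation embeddings
goal = torch.randn(500, 512)      # placeholder goal embeddings
actions = torch.randn(500, 8, 2)  # placeholder future-waypoint targets

model = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 16))

# Training recipe as reported: AdamW, lr 1e-4, batch size 50, 70 epochs
# (run on a single Nvidia GeForce RTX 3090 in the paper's setup).
optimiser = torch.optim.AdamW(model.parameters(), lr=1e-4)
loader = DataLoader(TensorDataset(obs, goal, actions),
                    batch_size=50, shuffle=True)

for epoch in range(70):
    for o, g, a in loader:
        pred = model(torch.cat([o, g], dim=-1)).view(-1, 8, 2)
        loss = nn.functional.mse_loss(pred, a)  # illustrative loss choice
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
```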

Traversability segmentation, the task of identifying navigable space, involved training several models that pair distinct decoder architectures with pretrained backbones: DeepLabV3+ (EfficientNet-B1), FPN (SE-ResNet50), SegFormer (MiT-B1), UNet++ (ResNet50), and UNet (EfficientNet-B1). Training was conducted on an Nvidia A100 GPU with a batch size of 32 for 50 epochs, using the Adam optimiser with a learning rate of 2e-4. The dataset was split 85.5% for training, 9.5% for validation, and 5% for testing. Figure 20 showcases the outputs of these models, allowing direct visual comparison of segmentation performance and highlighting the strengths and weaknesses of each approach.
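All of these architecture/backbone pairings are available off the shelf. One plausible way to instantiate them is via the segmentation_models_pytorch library; note that this library choice, and the binary (single-class) output, are assumptions rather than details stated in the paper.

```python
import torch
import segmentation_models_pytorch as smp

# The use of segmentation_models_pytorch is an assumption; the paper does
# not name its implementation. smp.Segformer requires a recent smp release.
common = dict(encoder_weights="imagenet", in_channels=3, classes=1)
models = {
    "deeplabv3plus_effb1": smp.DeepLabV3Plus(encoder_name="efficientnet-b1", **common),
    "fpn_seresnet50":      smp.FPN(encoder_name="se_resnet50", **common),
    "segformer_mitb1":     smp.Segformer(encoder_name="mit_b1", **common),
    "unetpp_resnet50":     smp.UnetPlusPlus(encoder_name="resnet50", **common),
    "unet_effb1":          smp.Unet(encoder_name="efficientnet-b1", **common),
}

# Reported recipe: Adam, lr 2e-4, batch size 32, 50 epochs on an Nvidia A100;
# classes=1 treats traversability as binary segmentation (an assumption).
optimisers = {name: torch.optim.Adam(m.parameters(), lr=2e-4)
              for name, m in models.items()}
```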

The study evaluated language-guided goal annotation through a human study assessing both goal relevance (whether the chosen goal aligned with the observed trajectory) and caption quality (the informativeness and accuracy of the descriptions generated within bounding boxes). Participants' reviews of the annotations informed refinements to the automated pipelines. Qualitative examples in Figure 21 categorise annotation outcomes as good, partially good, or bad, offering insight into the strengths and weaknesses of the annotation process and guiding future improvements.

Researchers plan to release EgoWalk publicly to foster collaboration and accelerate progress in robotic navigation. The dataset gives researchers and developers a resource for training and evaluating new algorithms and for building more robust, reliable robotic systems. The team anticipates that EgoWalk will drive advances in applications ranging from autonomous vehicles and delivery robots to assistive technologies.

👉 More information
🗞 EgoWalk: A Multimodal Dataset for Robot Navigation in the Wild
🧠 DOI: https://doi.org/10.48550/arXiv.2505.21282
