Robots Learn Complex Tasks with New AI Systems

Researchers at Google DeepMind have made significant advances in robot dexterity, enabling robots to learn complex tasks that require precise movement. Two new AI systems, ALOHA Unleashed and DemoStart, are paving the way for robots to perform a wider variety of helpful tasks. ALOHA Unleashed helps robots learn novel two-armed manipulation tasks, such as tying shoelaces and repairing another robot.

DemoStart uses simulations to improve real-world performance on a multi-fingered robotic hand, achieving a success rate of over 98% on various tasks in simulation. The systems were developed using MuJoCo, an open-source physics simulator, and tested on a three-fingered robotic hand called DEX-EE, developed in collaboration with Shadow Robot. These advances bring us closer to a future where AI robots can assist people with daily tasks at home and in the workplace.

Advances in Robot Dexterity: Enabling Robots to Perform Complex Tasks

Robotics research has made significant strides in recent years, with a focus on developing robots that can perform complex tasks requiring dexterous movement. Two new artificial intelligence (AI) systems, ALOHA Unleashed and DemoStart, have been introduced to help robots learn to perform novel two-armed manipulation tasks and improve real-world performance on a multi-fingered robotic hand.

Improving Imitation Learning with Two Robotic Arms

ALOHA Unleashed is a new method that achieves a high level of dexterity in bi-arm manipulation. It builds on the ALOHA 2 platform, which was based on the original ALOHA (a low-cost, open-source hardware system for bimanual teleoperation) from Stanford University. ALOHA 2 is significantly more dexterous than prior systems because its two arms can be easily teleoperated for training and data collection, allowing robots to learn new tasks from fewer demonstrations.

The ALOHA Unleashed method involves collecting demonstration data by teleoperating the robot through difficult tasks like tying shoelaces and hanging t-shirts. Next, a diffusion method predicts robot actions by iteratively refining random noise, similar to how the Imagen model generates images. This helps the robot learn from the demonstration data, so it can perform the same tasks on its own.
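The idea of refining random noise into an action can be sketched in a few lines. This is a minimal illustration, not ALOHA Unleashed's actual model: the noise-prediction network is replaced by a placeholder function, and the 14-dimensional action vector (assuming roughly 7 joints per arm) is an assumption for illustration.

```python
import numpy as np

def predict_noise(noisy_action, step, obs):
    """Placeholder for a trained noise-prediction network.

    In a real diffusion policy this would be a neural network
    conditioned on camera observations; here a stand-in nudges
    the action toward a fixed target purely for illustration.
    """
    target = np.zeros_like(noisy_action)  # hypothetical "clean" action
    return noisy_action - target

def denoise_action(obs, action_dim=14, steps=50, rng=None):
    """Iteratively refine pure random noise into an action vector."""
    rng = rng or np.random.default_rng(0)
    action = rng.standard_normal(action_dim)  # start from pure noise
    for step in reversed(range(steps)):
        predicted = predict_noise(action, step, obs)
        action = action - predicted / steps  # small denoising step
    return action

action = denoise_action(obs=None)
print(action.shape)  # (14,)
```

Each pass removes a little of the predicted noise, so the action gradually converges from randomness toward a usable command, mirroring how image diffusion models refine noise into a picture.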

Learning Robotic Behaviors from Few Simulated Demonstrations

DemoStart is another new system that uses a reinforcement learning algorithm to help robots acquire dexterous behaviors in simulation. These learned behaviors are especially useful for complex embodiments, like multi-fingered hands. DemoStart first learns from easy states and over time starts learning from more difficult states until it masters a task to the best of its ability.
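The easy-to-hard progression can be sketched as a reverse curriculum over demonstration states. This is a hypothetical simplification of DemoStart's approach: `attempt_from_state` stands in for running a full reinforcement-learning episode, and the stage-advance rule is an assumption for illustration.

```python
def demostart_style_curriculum(demo_states, attempt_from_state,
                               success_threshold=0.9, trials=20):
    """Sketch of a reverse curriculum over demonstration states.

    Training episodes begin near the end of a demonstration (an
    'easy' state close to success) and move toward the start of
    the task as the policy's success rate improves.
    """
    stage = len(demo_states) - 1  # easiest: the latest demo state
    history = []
    while stage >= 0:
        successes = sum(attempt_from_state(demo_states[stage])
                        for _ in range(trials))
        rate = successes / trials
        history.append((stage, rate))
        if rate >= success_threshold:
            stage -= 1  # policy is ready for a harder start state
        else:
            break  # keep training at this stage (training loop omitted)
    return history

demo = ["grasp", "lift", "insert"]  # toy demonstration states
log = demostart_style_curriculum(demo, attempt_from_state=lambda s: True)
print(log)  # [(2, 1.0), (1, 1.0), (0, 1.0)]
```

The key property is that the robot always practices from states it can almost solve, so learning signal is available from the start even for tasks that are nearly impossible from scratch.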

The robot achieved a success rate of over 98% on a number of different tasks in simulation, including reorienting cubes so that a certain color faces up, tightening a nut and bolt, and tidying up tools. In the real-world setup, it achieved a 97% success rate on cube reorientation and lifting, and 64% on a plug-socket insertion task that required a high degree of finger coordination and precision.

The Future of Robot Dexterity

Robotics is a unique area of AI research because it shows how well an approach holds up in the real world. For example, a large language model could tell you how to tighten a bolt or tie your shoes, but even if it were embodied in a robot, it could not perform those tasks itself.

One day, AI robots will help people with all kinds of tasks at home, in the workplace, and more. Dexterity research, including the efficient and general learning approaches described above, will help make that future possible. While there is still much work to be done before robots can grasp and handle objects with the ease and precision of humans, significant progress has been made, and each groundbreaking innovation is another step in the right direction.

The Role of Simulation in Robot Learning

Robotic learning in simulation can reduce the cost and time needed to run actual physical experiments. However, designing these simulations is difficult, and they don’t always translate successfully back into real-world performance. By combining reinforcement learning with learning from a few demonstrations, DemoStart’s progressive learning automatically generates a curriculum that bridges the sim-to-real gap, making it easier to transfer knowledge from a simulation into a physical robot.
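One common way to narrow the sim-to-real gap, often used alongside approaches like the one described above, is to randomize the simulator's physics each episode so the learned behavior does not overfit to one exact simulation. The sketch below is illustrative only: the parameter names and ranges are assumptions, not the settings used by DemoStart or MuJoCo.

```python
import random

def randomized_sim_params(rng, base_friction=1.0, base_mass=0.5):
    """Sample randomized physics parameters for one training episode.

    Varying friction, object mass, and sensor noise across episodes
    forces the policy to succeed under many plausible dynamics,
    so it is more likely to transfer to the physical robot.
    """
    return {
        "friction": base_friction * rng.uniform(0.8, 1.2),
        "object_mass": base_mass * rng.uniform(0.9, 1.1),
        "sensor_noise_std": rng.uniform(0.0, 0.02),
    }

rng = random.Random(42)
params = [randomized_sim_params(rng) for _ in range(3)]
```

In practice each sampled parameter set would configure the physics engine for one episode before the policy is rolled out and updated.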

The DEX-EE dexterous robotic hand, developed by Shadow Robot in collaboration with the Google DeepMind robotics team, is an example of how this approach can be used to enable more advanced robot learning through intensive experimentation.
