Kennesaw State University researchers are poised to revolutionize robotic dexterity with a combined $340,000 in funding from the National Science Foundation and NVIDIA. Led by Robotics and Mechatronics Engineering professor Lingfeng Tao, the project aims to move beyond simple robotic grippers toward multi-finger hands capable of complex manipulation – rotating objects, using tools, and adapting to physical interactions. “My research focuses on dexterous manipulation,” Tao explains, envisioning robotic hands “similar to the human hand, with multiple joints that can move independently and perform complex tasks.” This advancement, built on deep reinforcement learning, could unlock applications ranging from robotic surgery and space exploration to disaster response, ultimately creating robots that are “intelligent, safe, and truly helpful.”
Dexterous Manipulation Focuses on Human-Like Robotic Hand Control
Researchers are increasingly focused on bridging the gap between robotic capability and the nuanced dexterity of the human hand, with significant progress emerging from Kennesaw State University. Lingfeng Tao, a Robotics and Mechatronics Engineering professor, is spearheading work aimed at creating robotic hands with independently moving joints capable of complex tasks – a marked departure from simpler robotic grippers. The initiative is bolstered by $340,000 in combined funding from the National Science Foundation ($300,000) and an NVIDIA Academic Grant ($40,000), resources that will support the advanced AI hardware the work requires.
Tao’s approach moves beyond simply mirroring human movements onto robotic systems, acknowledging the critical role of physical interaction. “When humans manipulate objects, there are physical interactions happening,” he explains, noting that current remote control methods often force operators to proceed cautiously due to the robot’s lack of sensory feedback.
To overcome this, Tao employs deep reinforcement learning, simulating environments where virtual robots practice tasks en masse, learning from both successes and failures. “We collect all of those experiences,” Tao said. “The AI learns how to avoid failure and encourage successful behavior.” He draws a parallel to childhood learning, stating, “A kid already has basic abilities from playing with toys.” This allows for a balance of human intent and autonomous robotic action, potentially enabling remote operation in hazardous environments like the moon or deep sea, where “The human can still control it to do very subtle, dexterous tasks.” Applications span robotic surgery, disaster response, and manufacturing.
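The trial-and-error loop Tao describes – run many cheap simulated episodes, reward successful behavior, penalize failure – can be sketched in miniature. The toy task, state and action counts, and reward values below are all illustrative assumptions, and the value table stands in for the deep neural network a real deep-reinforcement-learning system would train; this is a sketch of the general technique, not the team’s actual system.

```python
import random

# Toy stand-in for a manipulation task: for each object "state" the agent
# must pick the correct action (e.g., a grasp type). TARGET is a
# hypothetical ground truth used only to generate rewards in simulation.
N_STATES, N_ACTIONS = 4, 3
TARGET = {0: 2, 1: 0, 2: 1, 3: 2}

# Value estimates per (state, action); a deep-RL agent would use a neural
# network here instead of a table.
q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def run_episode(epsilon=0.2, lr=0.5):
    """One simulated trial: act in every state, then reinforce the outcome."""
    for state in range(N_STATES):
        # Explore occasionally; otherwise exploit the best-known action.
        if random.random() < epsilon:
            action = random.randrange(N_ACTIONS)
        else:
            action = max(range(N_ACTIONS), key=lambda a: q[state][a])
        # Success is rewarded, failure mildly penalized.
        reward = 1.0 if action == TARGET[state] else -0.1
        # Nudge the value estimate toward the observed outcome.
        q[state][action] += lr * (reward - q[state][action])

random.seed(0)
for _ in range(2000):  # thousands of inexpensive simulated episodes
    run_episode()

# The learned policy: the highest-valued action in each state.
policy = [max(range(N_ACTIONS), key=lambda a: q[s][a]) for s in range(N_STATES)]
print(policy)
```

After enough simulated episodes, the greedy policy converges to the successful action for each state – the same collect-experience, reinforce-success dynamic the article describes, which production systems scale up with thousands of parallel simulations on GPU hardware.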
Deep Reinforcement Learning Trains Robots Through Simulated Environments
Deep reinforcement learning is emerging as a powerful technique for imbuing robots with the nuanced motor skills currently beyond their reach. Unlike systems reliant on mirroring human movements, Tao’s work centers on creating robotic awareness of physical interactions. “If you only map human motion to the robot, the robot cannot understand or feel those interactions,” he said.
Tao’s innovation involves training robots within expansive simulated environments, allowing thousands of virtual iterations to occur concurrently. A $300,000 National Science Foundation (NSF) grant and a $40,000 NVIDIA Academic Grant are fueling this research, providing the necessary computational power. Tao again likens the process to how children acquire skills: “They watch adults, learn how the tool is used, and then apply their own skills.”
The ultimate goal is to achieve a balance between human oversight and robotic autonomy, opening possibilities for remote operation in hazardous or inaccessible locations, such as space exploration or disaster response. “You could send a robot to the moon or to deep-sea environments,” Tao said.
$300,000 NSF & $40,000 NVIDIA Grants Support Research
A surge in funding is bolstering efforts to imbue robots with a more nuanced understanding of physical interaction, crucial for tasks demanding human-like dexterity. Tao’s work diverges from conventional remote-control systems that simply mirror human movements, a method he argues overlooks critical sensory feedback. The research team, based at the university’s Marietta Campus, is developing systems capable of both autonomous action and high-level human direction, with applications ranging from remote surgery to deep-sea exploration. SPCEET Dean Lawrence Whitman emphasized the significance of Tao’s work, stating, “Tao’s research unites artificial intelligence and robotics to change the way we live and work.”
“My research focuses on dexterous manipulation,” said Lingfeng Tao, a Robotics and Mechatronics Engineering professor in Kennesaw State University’s Southern Polytechnic College of Engineering and Engineering Technology. “I want to control robot hands that are similar to the human hand, with multiple joints that can move independently and perform complex tasks.”