
Robots learn to respond 'naturally' to human interaction

20 May 2013

Robots can learn to receive an object handed to them with something approaching natural human motion by following digitised examples of human movement.

The method was developed by scientists at Disney Research, Pittsburgh, in a project partially funded by the International Centre for Advanced Communication Technologies (interACT) at Carnegie Mellon University and Karlsruhe Institute of Technology (KIT).

Recognising that a person is handing over an object, and predicting where the hand-off will occur, is difficult for a robot. The researchers from Disney and KIT addressed the problem by motion-capturing pairs of people passing objects to one another, creating a database of human motion.

By rapidly searching the database, a robot can recognise what the human is doing and make a reasonable estimate of where they are likely to extend their hand.

People handing a coat, a package or a tool to a robot will become commonplace if robots are introduced into workplaces and homes, says Disney research scientist Katsu Yamane. But the technique he developed with Marcel Revfi, an interACT exchange student from KIT, could apply to any number of situations where a robot needs to synchronise its motion with that of a human.

In the case of accepting a hand-off, it is not sufficient simply to develop a technique that enables the robot to efficiently find and grasp the object. “If a robot just sticks out its hand blindly, or uses motions that look more robotic than human, a person might feel uneasy working with that robot or might question whether it is up to the task,” Yamane explains. “We assume human-like motions are more user-friendly because they are familiar.”

Human-like motion is often achieved in robots by using motion capture data from people. But that’s usually done in tightly scripted situations, based on a single person’s movements. For the general scenarios envisioned by Yamane, a sampling of motion from at least two people would be necessary and the robot would have to access that database interactively, so it could adjust its motion as the person handing it a package progressively extended his or her arm.

To enable a robot to access a library of human-to-human passing motions with the speed necessary for robot-human interaction, the researchers developed a hierarchical data structure. Using principal component analysis, they first developed a rough estimate of the distribution of the motion samples. They then grouped samples of similar poses and organised them into a binary-tree structure.
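To illustrate the idea, the following is a minimal sketch in Python of how such a structure could be built. It is not the researchers' implementation: the pose representation, the names MotionNode and build_tree, and the min_cluster parameter are all assumptions here, with plain NumPy standing in for whatever principal component analysis routine was actually used.

    # Sketch of the hierarchical database, assuming each motion sample is a
    # fixed-length vector of joint angles; the actual representation and
    # parameters used by the researchers are not given in the article.
    import numpy as np

    class MotionNode:
        """Binary-tree node holding a cluster of similar pose samples."""
        def __init__(self, samples):
            self.samples = samples            # (n, d) array of pose vectors
            self.mean = samples.mean(axis=0)  # cluster centroid
            self.left = self.right = None
            self.axis = None                  # principal direction of the split
            self.threshold = None

    def build_tree(samples, min_cluster=8):
        """Recursively split samples along their principal component."""
        node = MotionNode(samples)
        if len(samples) <= min_cluster:
            return node                       # leaf: small group of similar poses
        # PCA via SVD: the first right singular vector is the direction of
        # greatest variance within this cluster.
        centred = samples - node.mean
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        axis = vt[0]
        proj = centred @ axis
        threshold = np.median(proj)
        mask = proj <= threshold
        if mask.all() or not mask.any():
            return node                       # degenerate split; keep as a leaf
        node.axis, node.threshold = axis, threshold
        node.left = build_tree(samples[mask], min_cluster)
        node.right = build_tree(samples[~mask], min_cluster)
        return node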

With a series of 'either/or' decisions, the robot can rapidly search this database, so it can recognise when the person initiates a handing motion and then refine its response as the person follows through.
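Continuing the sketch above (and assuming the hypothetical MotionNode tree it builds), the ‘either/or’ lookup could look like this. Each node costs one dot product and one comparison, which is what makes the search fast enough for interactive use:

    def search_tree(node, observed_pose):
        """Descend to the leaf cluster most similar to the observed pose."""
        while node.axis is not None:
            proj = (observed_pose - node.mean) @ node.axis
            node = node.left if proj <= node.threshold else node.right
        return node

    # As the human's arm extends, the robot would re-run the search on each
    # new pose estimate and read a predicted hand-off pose from the matched
    # cluster, refining its response as the motion unfolds:
    #   leaf = search_tree(root, current_pose)
    #   predicted_handoff = leaf.mean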

The team tested the method using computer simulations and, because it is essential to include a human in the loop, with the upper body of a humanoid robot. They confirmed that the robot began moving its arm before the human’s hand reached the intended passing location, and that the robot’s hand position roughly matched that of the human receivers in the database it was attempting to mimic.

Yamane said further work is necessary to expand the database for a wider variety of passing motions and passing distances. As more capable hardware becomes available, the researchers hope to add finger motions and secondary behaviours that would make the robot’s motion more engaging.

