
Machine perception lab shows robotic one-year-old on video

10 January 2013

A humanoid robot that mimics the expressions of a one-year-old child will be used in studies on sensory-motor and social development. The face alone has 27 moving parts to create expressions.

Different faces of Diego-san: video of robo-toddler shows him demonstrating different facial expressions, using 27 moving parts in the head alone

The robot, dubbed 'Diego-san', features hardware developed by leading robot manufacturers: the head by Hanson Robotics, and the body by Japan’s Kokoro Co. The project is led by research scientist Javier Movellan of the University of California at San Diego (UCSD), who directs the Institute for Neural Computation's Machine Perception Laboratory, based in the UCSD division of the California Institute for Telecommunications and Information Technology.

The Diego-san project is also a joint collaboration with the Early Play and Development Laboratory of professor Dan Messinger at the University of Miami, and with professor Emo Todorov's Movement Control Laboratory at the University of Washington.

Movellan and his colleagues are developing the software that allows Diego-san to learn to control his body and to learn to interact with people. 

"We've made good progress developing new algorithms for motor control, and they have been presented at robotics conferences, but generally on the motor-control side, we really appreciate the difficulties faced by the human brain when controlling the human body," said Movellan, reporting even more progress on the social-interaction side.

"We developed machine-learning methods to analyse face-to-face interaction between mothers and infants, to extract the underlying social controller used by infants, and to port it to Diego-san. We then analysed the resulting interaction between Diego-san and adults." Full details and results of that research are being submitted for publication in a top scientific journal.

While photos and videos of the robot have been presented at scientific conferences in robotics and in infant development, the general public is getting a first peek at Diego-san’s expressive face in action (see video).

“This robotic baby boy was built with funding from the National Science Foundation and serves cognitive AI [artificial intelligence] and human-robot interaction research,” wrote Hanson. “With high definition cameras in the eyes, Diego San sees people, gestures, expressions, and uses AI modelled on human babies, to learn from people, the way that a baby hypothetically would. The facial expressions are important to establish a relationship, and communicate intuitively with people.”

Diego-san is actually much larger than a typical one-year-old – mainly because miniaturising the parts would have been too costly. It stands about 130cm tall and weighs 30kg, and its body has a total of 44 pneumatic joints. Its head alone contains 27 moving parts.

The robot’s sensors and actuators were built to approximate the levels of complexity of human infants, including actuators to replicate dynamics similar to those of human muscles. The technology should allow Diego-san to learn and autonomously develop sensory-motor and communicative skills typical of one-year-old infants.
