
Cheek to Chip: Dancing Robots and AI's Future

16 Authors

Recent generations of humanoid robots increasingly resemble humans in shape and articulatory capacity. This progress has motivated researchers to design dancing robots that can mimic the complexity and style of human choreographic dance. Such complicated actions are usually programmed manually and ad hoc, an approach that is both tedious and inflexible. Researchers at the University of Tokyo have developed the learning-from-observation (LFO) training method to overcome this difficulty.1,2 LFO enables a robot to acquire knowledge of what to do and how to do it by observing human demonstrations. Direct mapping from human joint angles to robot joint angles works poorly because of the dynamic and kinematic differences between the observed person and the robot (for example, in weight, balance, and arm and leg lengths). LFO therefore relies on predesigned task models, which represent only the actions (and the features thereof) that are essential to mimicry. It then adapts these actions to the robot's morphology and dynamics so that the robot can reproduce the movement. This indirect, two-step mapping is crucial for robust imitation and performance.
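To make the two-step mapping concrete, here is a minimal Python sketch of the idea, assuming a single arm with only shoulder and elbow joints. The TaskModel class, the observe and adapt functions, the feature choices (hand height and arm extension), and the joint limits are all hypothetical illustrations, not the actual LFO implementation described in the article.

```python
# Hypothetical sketch of LFO-style two-step mapping.
# All names, features, and numbers here are illustrative assumptions,
# not the University of Tokyo system.

from dataclasses import dataclass


@dataclass
class TaskModel:
    """Abstract description of one dance action: only the features
    essential to mimicry, independent of any particular body."""
    name: str
    hand_height: float     # normalized: 0 = hip level, 1 = overhead
    arm_extension: float   # normalized: 0 = fully folded, 1 = fully extended


def observe(human_joint_angles: dict) -> TaskModel:
    """Step 1: abstract the observed human motion into a task model.
    Here a trivial feature extraction stands in for real perception."""
    shoulder = human_joint_angles["shoulder"]  # degrees
    elbow = human_joint_angles["elbow"]        # degrees
    return TaskModel(
        name="arm_wave",
        hand_height=min(max(shoulder / 180.0, 0.0), 1.0),
        arm_extension=1.0 - min(abs(elbow) / 180.0, 1.0),
    )


def adapt(task: TaskModel, robot: dict) -> dict:
    """Step 2: map the task model onto this robot's own morphology,
    respecting its (possibly narrower) joint limits."""
    shoulder = task.hand_height * robot["shoulder_limit_deg"]
    elbow = (1.0 - task.arm_extension) * robot["elbow_limit_deg"]
    return {"shoulder": shoulder, "elbow": elbow}


if __name__ == "__main__":
    human_pose = {"shoulder": 150.0, "elbow": 20.0}  # observed angles (deg)
    robot_spec = {"shoulder_limit_deg": 120.0, "elbow_limit_deg": 90.0}

    task = observe(human_pose)         # what to do (body-independent)
    command = adapt(task, robot_spec)  # how to do it on this robot
    print(task)
    print(command)
```

The point of the intermediate task model in this sketch is that observe never needs to know about a particular robot and adapt never sees the raw human data, which is what lets the same observed action transfer across bodies with different weights, balances, and limb lengths.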

Published in: IEEE Intelligent Systems (Volume: 23, Issue: 2)