Towards multi-state visuo-spatial reasoning based proactive human-robot interaction

4 Author(s)
Pandey, A.K.; Ali, M.; Warnier, M.; Alami, R. (LAAS, Toulouse, France)

Robots are expected to cooperate with humans in day-to-day interaction, and one aspect of such cooperation is behaving proactively. In this paper, our robot exploits the visuo-spatial perspective-taking of the human partner, not only from his current state but also from a set of different states he might attain from it. Such rich information helps the robot better predict `where' the human can perform a particular task and how the robot could support it. We have tested the system on two different robots for tasks in which the human partner gives an object to the robot or makes it accessible. Equipped with such multi-state visuo-spatial perspective-taking capabilities, our robots show different proactive behaviors depending on the task and situation, such as proactively reaching out, and to the correct place, when the human has to give an object to the robot. Preliminary results of user studies show that such proactive behaviors reduce the human's `confusion' and make the robot seem more `aware' of the task and the human.
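The core idea of multi-state perspective-taking can be illustrated with a minimal sketch (not the authors' implementation): enumerate the states the human might attain (e.g. current posture, leaning forward, standing), test which candidate placements are reachable from each state, and have the robot prefer a placement reachable from as many states as possible. The state and point coordinates, the `ARM_REACH` value, and the scoring rule below are all illustrative assumptions.

```python
# Illustrative sketch of multi-state visuo-spatial reasoning (hypothetical,
# not the paper's implementation). Each human state is approximated by a
# shoulder position; a placement is "reachable" if it lies within a nominal
# arm reach of that position.
import math

ARM_REACH = 0.75  # metres, assumed nominal human arm reach


def reachable(shoulder, point, reach=ARM_REACH):
    """True if `point` is within `reach` of the `shoulder` position."""
    return math.dist(shoulder, point) <= reach


def best_placement(candidate_states, candidate_points):
    """Score each candidate point by the number of attainable human states
    from which it is reachable, and return the highest-scoring point."""
    def score(p):
        return sum(reachable(s, p) for s in candidate_states)
    return max(candidate_points, key=score)


# Human seated near the origin; states he might attain:
# current posture, leaning forward, standing up.
states = [(0.0, 0.0, 1.0), (0.3, 0.0, 1.0), (0.0, 0.0, 1.4)]

# Candidate spots where the robot could proactively reach out
# to receive (or present) the object.
points = [(0.5, 0.0, 1.0), (1.2, 0.0, 1.0), (0.4, 0.1, 1.2)]

print(best_placement(states, points))  # a point reachable from all 3 states
```

In this toy setup the far point (1.2 m away) is reachable from none of the states, so the robot would reach out to a nearby placement that remains valid whichever state the human actually adopts; the paper's system performs the analogous reasoning with full geometric perspective-taking.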

Published in:

2011 15th International Conference on Advanced Robotics (ICAR)

Date of Conference:

20-23 June 2011