
Human-robot interaction through real-time auditory and visual multiple-talker tracking


5 Author(s)
Okuno, H.G.; Nakadai, K.; Hidai, K.; Mizoguchi, H.; et al. (ERATO, Japan Science and Technology Corp., Tokyo, Japan)

Nakadai et al. (2001) have developed a real-time auditory and visual multiple-talker tracking technique. In this paper, the technique is applied to human-robot interaction, including a receptionist robot and a companion robot at a party. The system performs face identification, speech recognition, focus-of-attention control, and sensorimotor tasks while tracking multiple talkers. It is implemented on an upper-torso humanoid, and talker tracking is achieved by distributed processing on three nodes connected by a 100Base-TX network, with a tracking delay of 200 ms. Focus of attention is controlled by associating auditory and visual streams, using the sound-source direction and the talker position as clues. Once an association is established, the humanoid keeps its face turned toward the associated talker.
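The association step described in the abstract, pairing an auditory stream (a sound-source direction) with a visual stream (a detected talker position) when the two directions agree, can be sketched in Python. This is a minimal illustration only: the azimuth representation, the angular threshold, and all function names are assumptions for exposition, not the authors' implementation.

import math

# Assumed threshold: maximum azimuth gap (degrees) for associating an
# auditory stream with a visual stream. Illustrative value only.
ASSOCIATION_THRESHOLD_DEG = 10.0

def angular_difference(a: float, b: float) -> float:
    """Smallest absolute difference between two azimuths, in degrees."""
    d = (a - b) % 360.0
    return min(d, 360.0 - d)

def associate_streams(sound_azimuths, face_azimuths):
    """Greedily pair each sound-source direction with the nearest unused
    face position that lies within the threshold.

    Returns a list of (sound_index, face_index) associations."""
    associations = []
    used_faces = set()
    for i, s in enumerate(sound_azimuths):
        best, best_gap = None, ASSOCIATION_THRESHOLD_DEG
        for j, f in enumerate(face_azimuths):
            if j in used_faces:
                continue
            gap = angular_difference(s, f)
            if gap <= best_gap:
                best, best_gap = j, gap
        if best is not None:
            associations.append((i, best))
            used_faces.add(best)
    return associations

# Example: two localized sound sources and two detected faces.
sounds = [32.0, -45.0]   # azimuths from sound-source localization (deg)
faces = [30.5, 80.0]     # azimuths of faces from the vision system (deg)
for s_idx, f_idx in associate_streams(sounds, faces):
    # Once a stream pair is associated, the humanoid would turn its head
    # toward the associated talker and keep facing that direction.
    print(f"sound {s_idx} associated with face {f_idx}: "
          f"turn head to {faces[f_idx]:.1f} deg")

In this sketch only the first sound source associates with a face, since the second lies outside the assumed threshold; the real system additionally maintains the streams over time rather than matching single snapshots.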

Published in: Proceedings of the 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2001), Volume 3

Date of Conference: 2001