Visually steerable sound beam forming method possible to track target person by real-time visual face tracking and speaker array

Authors (4):
K. Shinoda (Dept. of Mech. Eng., Tokyo Univ. of Sci., Noda, Japan); H. Mizoguchi; S. Kagami; K. Nagashima

This paper presents a method of visually steerable sound beam forming that combines real-time face detection and tracking by motion image processing with sound beam forming by a speaker array. The direction toward a target person is obtained in real time from the face tracking. By continuously updating the sound beam direction with the face detection and tracking result, the system can keep transmitting sound selectively toward the target person even as he or she moves around. Experimental results demonstrate the feasibility and effectiveness of the method.
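The core idea can be sketched as two steps: map the tracked face's horizontal image position to a steering angle, then compute per-speaker delays for delay-and-sum transmit beamforming with a uniform linear array. The sketch below is an illustration of this standard technique, not the authors' implementation; the camera field of view, speaker spacing, and function names are assumptions.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def face_to_angle(face_x, image_width, fov_deg=60.0):
    """Map a face's horizontal pixel position to a steering angle in radians,
    using a simple pinhole-camera approximation. fov_deg is an assumed
    horizontal field of view, not a value from the paper."""
    half_fov = math.radians(fov_deg) / 2.0
    # Normalise the pixel position to [-1, 1] about the image centre.
    norm = (face_x - image_width / 2.0) / (image_width / 2.0)
    return math.atan(norm * math.tan(half_fov))

def steering_delays(num_speakers, spacing_m, angle_rad, c=SPEED_OF_SOUND):
    """Per-speaker delays (seconds) that steer a uniform linear speaker
    array toward angle_rad (measured from broadside), shifted so the
    smallest delay is zero."""
    raw = [n * spacing_m * math.sin(angle_rad) / c
           for n in range(num_speakers)]
    offset = min(raw)
    return [t - offset for t in raw]
```

For a face at the image centre the angle is zero and all delays vanish (the beam points straight ahead); as the face moves, the delays are recomputed each frame, which is the continuous-update loop the abstract describes.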

Published in:

2003 IEEE International Conference on Systems, Man and Cybernetics (Volume 3)

Date of Conference:

5-8 Oct. 2003