This paper presents a method for visually steerable sound beamforming. The method combines face detection and tracking via motion-image processing with sound beamforming by a speaker array. The direction toward a target person is obtained in real time from the face tracking. By continuously updating the beam direction with the face detection and tracking results, the system can keep transmitting sound selectively toward the target person, even while he or she moves around. Experimental results demonstrate the feasibility and effectiveness of the method.
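The abstract does not give implementation details. As a rough illustration of the steering step, the per-speaker delays for classic delay-and-sum beam steering of a uniform linear speaker array can be computed as below; the array geometry, function name, and speed of sound are assumptions, not taken from the paper.

```python
import math

def steering_delays(num_speakers, spacing_m, angle_deg, c=343.0):
    """Per-speaker delays (seconds) that steer a uniform linear
    array's beam toward angle_deg (measured from broadside).

    Delay-and-sum steering: delaying element n by n*d*sin(theta)/c
    aligns the emitted wavefronts along the desired direction.
    Delays are shifted so the smallest one is zero.
    """
    theta = math.radians(angle_deg)
    raw = [n * spacing_m * math.sin(theta) / c
           for n in range(num_speakers)]
    offset = min(raw)  # make all delays non-negative
    return [t - offset for t in raw]

# Example: 8 speakers, 5 cm spacing, steer 30 degrees off broadside.
delays = steering_delays(8, 0.05, 30.0)
```

In the described system, the face tracker would supply a new `angle_deg` each frame, and the delays would be recomputed so the beam follows the target person.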