This paper presents a data association method that uses audio and visual data to localize targets in a cluttered environment and to detect who is speaking to a robot. A particle filter is applied to efficiently select the optimal association between targets and measurements. The state variables comprise target positions and speaking states. To update the speaking state, the incoming sound signal is first evaluated via cross-correlation, and a likelihood is then computed from the audio information. The visual measurement is used to find an optimal association between the targets and the observed objects. The number of targets the robot should interact with is updated from the existence probabilities and associations. Experimental data were collected beforehand and simulated on a computer to verify the performance of the proposed method on the speaker selection problem in a cluttered environment. The algorithm was also implemented on a robotic system to demonstrate reliable interaction between the robot and speaking targets.
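The abstract combines a cross-correlation audio cue with a particle filter over target state. The following is a minimal sketch, not the authors' implementation: it assumes a two-microphone array, estimates the time-difference-of-arrival (TDOA) lag via cross-correlation, and uses a Gaussian likelihood over that lag to weight and resample bearing particles. All parameters (microphone spacing, sample rate, noise scales) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def crosscorr_peak(sig_delayed, sig_ref):
    """Lag (in samples) at which the cross-correlation of two mic signals peaks."""
    corr = np.correlate(sig_delayed, sig_ref, mode="full")
    return int(np.argmax(corr)) - (len(sig_ref) - 1)

def audio_likelihood(angle, measured_lag, mic_dist=0.2, fs=16000, c=343.0, sigma=2.0):
    """Gaussian likelihood of a bearing (rad) given a measured TDOA lag.

    mic_dist, fs, c, sigma are assumed example values, not from the paper.
    """
    expected_lag = mic_dist * np.sin(angle) / c * fs  # far-field TDOA model
    return np.exp(-0.5 * ((measured_lag - expected_lag) / sigma) ** 2)

def pf_update(particles, weights, measured_lag):
    """One particle-filter step: weight by audio likelihood, resample, jitter."""
    weights = weights * audio_likelihood(particles, measured_lag)
    weights = weights / weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx] + rng.normal(0.0, 0.02, len(particles))
    return particles, np.full(len(particles), 1.0 / len(particles))

# Example: recover the lag of a synthetic delayed signal, then localize.
sig = rng.normal(size=256)
delayed = np.concatenate([np.zeros(4), sig[:-4]])
lag = crosscorr_peak(delayed, sig)

particles = rng.uniform(-np.pi / 2, np.pi / 2, 500)
weights = np.full(500, 1.0 / 500)
for _ in range(5):
    particles, weights = pf_update(particles, weights, measured_lag=4.47)
```

In the paper the speaking state is part of the state vector alongside position; here only the bearing is filtered, so extending the particle with a binary speaking flag updated from the same audio likelihood would be the natural next step.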
Date of Publication: August 2009