We have developed a mobile assistive companion robot that combines a vision sensor and a laser range sensor to track and follow a target person. Although the system works well in most cases, the robot occasionally loses its target due to external factors such as poor viewing conditions or unstructured environments. To solve this problem, we develop a speech system and a sound source detection system that together provide sound source localization and speech interaction between the user and the robot. When the robot loses the target during tracking and following, it informs the user and waits for a clapping sound to re-localize the user's position. The proposed method integrates human-robot interaction based on speech and sound source detection to recover the target person's location when the robot is lost, which differs significantly from other solutions that rely on a motion model or Bayesian filters, such as Kalman filters or particle filters, to estimate the user's location after the target is lost. In this paper, we demonstrate the success of the proposed method experimentally.
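The clap-based re-localization described above can be sketched as a time-difference-of-arrival (TDOA) estimate between two microphones followed by a bearing computation. This is a minimal illustration only; the helper names, microphone spacing, and sample rate below are assumptions for the sketch, not details from the paper, which does not specify its sound source detection algorithm.

```python
import numpy as np

def tdoa_crosscorr(sig_l, sig_r, fs):
    """Estimate the time difference of arrival (seconds) between the
    left and right microphone signals via cross-correlation.
    Positive result: the left mic hears the sound later."""
    corr = np.correlate(sig_l, sig_r, mode="full")
    lag = np.argmax(corr) - (len(sig_r) - 1)  # lag in samples
    return lag / fs

def bearing_from_tdoa(tdoa, mic_spacing, c=343.0):
    """Convert a TDOA into a bearing angle (degrees) for a
    two-microphone array; c is the speed of sound in m/s."""
    s = np.clip(c * tdoa / mic_spacing, -1.0, 1.0)
    return np.degrees(np.arcsin(s))

# Synthetic demo: an impulsive "clap" that reaches the right mic
# 5 samples before the left mic (assumed geometry).
fs = 16000                   # sample rate, Hz (assumed)
spacing = 0.2                # mic spacing, m (assumed)
clap = np.zeros(1024)
clap[100] = 1.0              # idealized clap impulse
sig_r = clap
sig_l = np.roll(clap, 5)     # left mic hears it 5 samples later

tdoa = tdoa_crosscorr(sig_l, sig_r, fs)
angle = bearing_from_tdoa(tdoa, spacing)  # positive: source to the right
```

In practice the correlation would be computed on band-pass filtered audio frames, and the resulting bearing would be fused with the vision and laser tracker to steer the robot back toward the user.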