Nonverbal behavior during close human-human encounters is critical to achieving natural interaction. Humanoid robots that aim for natural interaction with humans should therefore be able to understand and synthesize nonverbal behavior in a way that mimics its human use. One of the most important situations in natural human-robot interaction is the explanation scenario, in which a human explains a task to the robot using natural verbal and nonverbal behavior. This situation occurs frequently in many HRI applications and is critical to the success of the robots-as-knowledge-media project proposed by the authors. This paper presents the implementation of a humanoid robot that exhibits human-like gaze control in explanation settings based only on reactive processing. The robot's software is built on the EICA architecture, which is designed to combine autonomy with interactivity at the lowest level of the system. The paper details the implementation and analyzes the naturalness of the resulting behavior and the effect of noisy input.
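To illustrate what "gaze control based only on reactive processing" can mean, the following is a minimal sketch of a purely reactive gaze policy: each perception cycle maps the current percept directly to a gaze target, with no deliberative planning. All names, fields, and thresholds here are hypothetical illustrations, not the authors' EICA implementation.

```python
from dataclasses import dataclass


@dataclass
class Percept:
    """Hypothetical per-cycle perceptual input during an explanation."""
    speaking: bool          # is the human instructor currently speaking?
    pointing: bool          # is the instructor pointing at an object?
    object_salience: float  # salience of the explained object, in [0, 1]


def select_gaze_target(p: Percept, mutual_gaze_time: float,
                       max_mutual: float = 3.0) -> str:
    """Reactive priority mapping from percept to gaze target.

    No internal plan or goal stack is kept: the output depends only on
    the current percept and a simple mutual-gaze timer, so the behavior
    emerges from the interaction itself.
    """
    if p.pointing or p.object_salience > 0.7:
        return "object"   # follow deictic gestures: joint attention
    if p.speaking and mutual_gaze_time < max_mutual:
        return "face"     # hold mutual gaze while the instructor speaks
    return "away"         # avert gaze to avoid unnatural staring
```

A pointing gesture overrides everything (joint attention), speech invites mutual gaze but only up to a time limit, and otherwise the robot averts its gaze, which is one simple way such priorities are often layered in reactive controllers.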