Recently, many researchers have tried to develop interactive agents or robots that can communicate with humans smoothly and naturally. However, although many of these systems have attracted media attention, none of them achieves genuinely smooth communication with humans in everyday situations. We consider that one reason is that these agents are not designed around the human cognitive features that allow people to adapt smoothly to their interaction partners. In our research, we focus on a speech interface system as an interactive agent. In addition, we propose a meaning acquisition model constructed as a basic technology for achieving the desired speech interface system. The model is based on cognitive features humans use to communicate, and it can recognize a user's intention through the interaction between the model and the user. Our results confirmed that the constructed model could recognize the intentions/meanings of ordinary verbal instructions through interaction with users, without our having to prepare a priori knowledge about the instructions, as is necessary with conventional speech interfaces. We expect that this result could contribute to achieving an interactive interface system and provide insights to researchers studying human-agent interaction.