This paper presents an integrated on-line operation system that enables a human user to operate humanoid robots using natural language instructions. The paper makes two major contributions. First, we present an integrated behavior system that triggers behaviors in response to speech commands by recognizing objects, triggering actions, and generating whole-body motions on-line. Second, we present a situated natural language instruction system that not only acts on speech commands but also responds to the direction of the sound source. A system that understands natural language instructions and acts accordingly requires the integration of knowledge representation, perception, decision making, and on-line motion generation technologies. This paper tackles this integration problem by addressing the representation of knowledge about objects and actions in a way that facilitates natural language instructions for tasks in indoor human environments. As a preliminary step toward a reliable and flexible natural language instruction system, we propose a taxonomy of objects in indoor human environments and a lexicon of actions. We report on the implementation of the proposed system on the humanoid robot HRP-2, which can locate auditory sources and receive natural language instructions from a user within 2 meters using an 8-channel microphone array connected to an embedded speech recognition system on board the robot.
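The abstract mentions locating auditory sources with a multi-channel microphone array. The paper's 8-channel pipeline is not detailed here, but the underlying idea can be illustrated with a minimal two-microphone sketch: estimate the inter-microphone time delay by cross-correlation, then convert it to a direction-of-arrival angle under a far-field assumption. The function name, microphone spacing, and sampling rate below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def estimate_doa(sig_a, sig_b, fs=16000, mic_dist=0.05, c=343.0):
    """Estimate direction of arrival (degrees) from two microphone signals.

    Illustrative two-mic TDOA sketch (NOT the paper's 8-channel method):
    - find the lag maximizing the cross-correlation of the two signals,
    - convert the lag to a time delay,
    - apply the far-field relation sin(theta) = c * delay / mic_dist.
    """
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_b) - 1)  # samples by which sig_a lags sig_b
    delay = lag / fs                               # seconds
    sin_theta = np.clip(c * delay / mic_dist, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))

# Synthetic check: the same noise source delayed by a known number of samples.
rng = np.random.default_rng(0)
src = rng.standard_normal(512)
shift = 2                                  # sig_a arrives 2 samples after sig_b
sig_b = src
sig_a = np.concatenate([np.zeros(shift), src[:-shift]])
angle = estimate_doa(sig_a, sig_b)
```

With a 5 cm spacing at 16 kHz the maximum physical delay is about 2.3 samples, so a 2-sample lag corresponds to a source well off to one side (roughly 59 degrees here). A real array with more channels would combine many such pairwise delays, or use beamforming, for robustness.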