Recently, multimodal expression has become an important issue in the field of human-robot interaction (HRI). Synchronizing modalities and determining which modalities to use are important aspects of multimodal expression. For example, when robots express emotional states, they may use facial expressions alone or combine them with gestures, neck motions, sounds, and so on. In this paper, emotional boundaries in a three-dimensional affect space are proposed for multimodal expression. The simultaneous expression of facial expressions and gestures using the proposed emotional boundaries was demonstrated on a simulator.
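As a rough illustration of the idea, the sketch below maps a point in a three-dimensional affect space to a set of expression modalities using nested boundary radii. The axis semantics, radius values, and modality names are all assumptions for illustration; the actual boundary definitions are given in the paper itself.

```python
import math

# Assumed boundary radii in a 3-D affect space (hypothetical values,
# not the ones defined in the paper).
FACE_ONLY_RADIUS = 0.3     # inside: facial expression alone
FACE_GESTURE_RADIUS = 0.7  # beyond: add further modalities

def select_modalities(affect_point):
    """Pick expression modalities from a 3-D affect-space point.

    affect_point: (x, y, z) coordinates, each assumed in [-1, 1].
    Returns the list of modalities to express simultaneously.
    """
    # Emotional intensity as Euclidean distance from the neutral origin.
    intensity = math.sqrt(sum(c * c for c in affect_point))
    if intensity < FACE_ONLY_RADIUS:
        return ["facial_expression"]
    if intensity < FACE_GESTURE_RADIUS:
        return ["facial_expression", "gesture"]
    return ["facial_expression", "gesture", "neck_motion", "sound"]

print(select_modalities((0.1, 0.0, 0.1)))  # weak affect: face only
print(select_modalities((0.5, 0.3, 0.2)))  # crosses first boundary
print(select_modalities((0.8, 0.6, 0.5)))  # crosses second boundary
```

The point of such boundaries is that modality selection becomes a property of where the affective state lies in the space, rather than a per-emotion hand-coded rule.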