We aim to equip the humanoid robot NAO with the capacity to perform expressive communicative gestures while telling a story. Given a set of intentions and emotions to convey, our system selects the corresponding gestures from a gestural database called a lexicon. It then computes the gestures' expressive parameters and plans their timing so that they are synchronized with speech. The gestures are then instantiated as robot joint values and sent to the robot, which executes the hand-arm movements. The robot has physical constraints that must be addressed, such as limits on movement space and joint speed. This article presents our ongoing work on a gesture model that generates co-verbal gestures for the robot while taking these constraints into account.
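The pipeline described above (select a gesture from the lexicon, then enforce the robot's physical limits before execution) can be sketched as follows. This is an illustrative sketch only: the lexicon entries, joint names, and limit values are hypothetical and do not reflect NAO's actual API or joint ranges.

```python
# Hypothetical sketch of the gesture pipeline: map a communicative
# intent to nominal joint targets from a lexicon, then clamp each
# target to the robot's joint limits. All names and values are
# illustrative assumptions, not NAO's real interface.

JOINT_LIMITS = {  # assumed per-joint limits, in radians
    "LShoulderPitch": (-2.0, 2.0),
    "LElbowRoll": (-1.5, 0.0),
}

LEXICON = {  # assumed intent -> nominal joint targets
    "greeting": {"LShoulderPitch": -1.0, "LElbowRoll": -0.5},
    "emphasis": {"LShoulderPitch": 2.5, "LElbowRoll": -1.2},
}

def clamp(value, lo, hi):
    """Restrict a joint target to the interval [lo, hi]."""
    return max(lo, min(hi, value))

def plan_gesture(intent):
    """Select a gesture for an intent and enforce joint limits."""
    targets = LEXICON[intent]
    return {j: clamp(v, *JOINT_LIMITS[j]) for j, v in targets.items()}

# Out-of-range targets are clipped to the robot's limits:
print(plan_gesture("emphasis"))  # LShoulderPitch clamped to 2.0
```

A real implementation would also schedule gesture stroke times against the speech synthesizer's phoneme timing and respect maximum joint velocities, which this sketch omits.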