In a future scenario where many devices can be controlled by voice, easy and intuitive access will be crucial for avoiding cognitive overload when users face many different systems and interaction models. We propose a model for interaction with spoken language interfaces applied to heterogeneous tasks for service robots, based on the idea of using a family of lifelike characters. We argue that certain visual cues can signal important features of the speech interface, with the aim of facilitating learning and transfer between interfaces. We discuss challenges for dialogue design that affect learnability, in light of the speech interface constructed for our full-scale robot prototype CERO.