Many tasks in the field of service robotics could benefit from a natural language interface that allows human users to talk to the robot as naturally as possible. However, little is known about what users actually find natural, since most experimental robotic systems involving natural language have not been systematically tested with human users unfamiliar with the system. In our simple scenario, human users refer to objects via their location rather than via feature descriptions. Our robot uses a computational model of spatial reference to interpret the linguistic instructions. In experiments with naive users, we test the adequacy of the model for achieving joint spatial reference, and we show how our approach can be extended to more complex spatial tasks in natural human-robot interaction.
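The abstract does not specify the computational model, but one common way to interpret location-based referring expressions is to score each candidate object against a spatial template for the uttered term. The sketch below is a minimal illustration of that general idea, not the paper's model; the object names, coordinates, and the cosine-based applicability score are all illustrative assumptions.

```python
import math

# Hypothetical candidate objects with (x, y) positions in the robot's
# reference frame. Names and coordinates are illustrative, not from the paper.
objects = {
    "cup": (-0.4, 0.6),
    "box": (0.5, 0.5),
    "ball": (0.0, 0.9),
}

# Canonical direction vectors for a few projective spatial terms.
DIRECTIONS = {
    "left": (-1.0, 0.0),
    "right": (1.0, 0.0),
    "front": (0.0, 1.0),
}

def applicability(position, term):
    """Score how well a position fits a spatial term: the cosine of the
    angle to the term's canonical direction, clipped at zero. This is a
    simple stand-in for a full spatial-template model."""
    dx, dy = DIRECTIONS[term]
    x, y = position
    norm = math.hypot(x, y)
    if norm == 0.0:
        return 0.0
    return max(0.0, (x * dx + y * dy) / norm)

def resolve(term):
    """Return the candidate object that best fits the spatial term."""
    return max(objects, key=lambda name: applicability(objects[name], term))

print(resolve("left"))  # → cup
```

In a full system, such scores would typically be combined with a dialogue component so the robot can ask for clarification when two candidates score similarly, which is one route to the joint spatial reference the abstract describes.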