When a robot interacts with a person, it is important to know with whom it is interacting, whether to avoid social faux pas or to remember user preferences. Continuous person identification during natural interactions, however, is extremely challenging: a person speaks to the robot only intermittently, all while changing pose, looking in other directions, and so on. In this paper, we address the problem of continuous person identification using both speech and face recognition. We demonstrate that the two modalities together produce a system that identifies people more accurately than either modality alone.
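The abstract does not specify how the two modalities are combined; a common approach is late (score-level) fusion, where each recognizer scores every candidate identity and the scores are merged. The sketch below illustrates that idea with a simple weighted sum; the weights, identity names, and score values are illustrative assumptions, not details from the paper.

```python
def fuse_scores(face_scores, voice_scores, w_face=0.6, w_voice=0.4):
    """Combine per-identity scores from a face recognizer and a speaker
    recognizer via a weighted sum; return the best identity and the
    fused score table. Weights are hypothetical, not from the paper."""
    identities = set(face_scores) | set(voice_scores)
    fused = {
        ident: w_face * face_scores.get(ident, 0.0)
               + w_voice * voice_scores.get(ident, 0.0)
        for ident in identities
    }
    return max(fused, key=fused.get), fused

# Example: face recognition is ambiguous (e.g. the person is looking
# away), but the voice strongly indicates "alice".
face = {"alice": 0.40, "bob": 0.45}
voice = {"alice": 0.90, "bob": 0.10}
best, fused = fuse_scores(face, voice)
# best is "alice": the confident voice score outweighs the weak face margin.
```

This is the intuition behind the abstract's claim: when one modality is degraded (pose change, gaze away, silence), the other can still disambiguate the identity.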