I. Introduction
Socially assistive robots (SARs) have tremendous potential to improve our society, yet to realize that potential these robots require a means of learning how to interact with humans across different tasks and settings. Given the infeasibility of designing a fully general robot, it is imperative that non-expert users can teach and adapt SARs in the wild. A popular approach to this problem is learning from demonstration (LfD), in which a human demonstrates a task and the robot builds a model that it then uses to execute the task independently. LfD has shown promising results in physical domains such as object manipulation, as well as in social domains such as therapy for Autism Spectrum Disorder [1] and group activities for older adults [2].

However, teaching tasks to SARs can be difficult because, while these robots may look human, they do not have human-level cognition. Teachers may overestimate the robot's reasoning or common-sense knowledge based on its humanoid appearance, a phenomenon referred to as the perceptual belief problem [3]. This problem can significantly impair LfD because a teacher cannot be effective without understanding which concepts the robot already knows and which it still needs to learn. For SARs to achieve greater autonomy, they must be able to rapidly acquire new concepts and convey the extent of their knowledge to their teachers.