
Interactive Task Learning for Social Robots: A Pilot Study


Abstract:

For socially assistive robots to achieve widespread adoption, the ability to learn new tasks in the wild is critical. Learning from Demonstration (LfD) approaches are a popular method for learning in the wild, but current methods require significant amounts of data and can be difficult to interpret. Interactive Task Learning (ITL) is an emerging learning paradigm that aims to teach tasks in a structured manner, minimizing the need for data and increasing transparency. However, to date, ITL has only been explored for physical robotics applications, and little research has examined how usable existing ITL systems are for non-expert users. In this work, we propose a novel approach to learning social tasks via ITL. This system utilizes recent advances in Natural Language Understanding (NLU) to learn from natural dialogue. We conducted a pilot study comparing the ITL system against an LfD approach to investigate differences in teaching performance as well as teachers' perceptions of trust in and workload imposed by these systems. We also analyzed participants' teaching behavior to identify successful and unsuccessful teaching strategies. Our findings suggest ITL could provide more transparency to users and improve performance by correcting speech recognition errors. However, participants generally preferred LfD and found it an easier teaching method. From the observed teaching behavior, we identify existing challenges that non-experts face when teaching social tasks via ITL. Using this, we propose areas of improvement toward future ITL learning paradigms that are intuitive, transparent, and performant.
Date of Conference: 01-05 October 2023
Date Added to IEEE Xplore: 13 December 2023
Conference Location: Detroit, MI, USA


I. Introduction

Socially assistive robots (SARs) have tremendous potential to improve our society, yet to do so these robots require a means of learning how to interact with humans across different tasks and settings. Given the infeasibility of designing a fully general robot, it is imperative that non-expert users can teach and adapt SARs in the wild. A popular approach for this is learning from demonstration (LfD), in which a human demonstrates a task and the robot forms a model that it then uses to execute the task independently. LfD has shown promising results in physical domains such as object manipulation, as well as in social domains such as therapy for Autism Spectrum Disorder [1] and group activities for older adults [2]. However, teaching tasks to SARs can be difficult because, while these robots may look human, they do not have human-level cognition. Teachers may overestimate the robot's reasoning or common-sense knowledge based on its humanoid appearance, a phenomenon referred to as the perceptual belief problem [3]. This can significantly impair LfD, because a teacher cannot be effective without understanding which concepts the robot already knows and which it still needs to learn. For SARs to achieve greater autonomy, they must be able to rapidly acquire new concepts and convey the extent of their knowledge to their teachers.
