Primates, and humans in particular, are highly adept at learning to use tools. In this talk, I will introduce a paradigm that exploits this sensorimotor learning capacity to obtain robot behaviors that would otherwise require manual programming by experts. The idea is to consider the target robotic platform as a tool that can be controlled by a human. Provided with an intuitive interface for controlling the robot, the human learns to perform a given task using the robot. This is akin to the stage where a beginner is learning to drive a car. After sufficient learning, the skilled control of the robot by the human provides learning data points that are used to construct an autonomous controller, so that the robot can perform the task without human guidance. I will demonstrate the feasibility of this framework with several examples, including a manipulation skill obtained for a robotic hand and a statically stable reaching skill obtained for a small humanoid robot.

From an engineering point of view, this paradigm relies on techniques from teleoperation and machine learning, and shares its goals with robotic imitation and robot learning by demonstration. The key difference is that the proposed paradigm includes the human in the control loop and employs the human brain as the adaptive controller for accomplishing a given task. Once the human has attained control proficiency, obtaining an autonomous controller boils down to reverse engineering the control policy established by the human brain. As time permits, I will present some ideas on the neural correlates of human-in-the-loop robot control and show how the interfaces built for robot skill synthesis can also be used in the reverse direction, for probing the motor control mechanisms employed by the central nervous system.
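The final step of the paradigm, turning the skilled human's teleoperation log into an autonomous controller, can be sketched minimally as a regression from recorded robot states to the human operator's commands. The code below is an illustrative toy, not the talk's actual method: it simulates the human's learned policy as a noisy linear map and recovers it by least squares; a real system would log states and commands from the teleoperation interface and would likely need a richer function class.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated teleoperation log: robot states (e.g. joint angles) paired with
# the commands a skilled human operator issued while performing the task.
# The human's control policy is stood in for by a fixed linear map plus noise.
n_samples, state_dim, action_dim = 500, 4, 2
true_policy = rng.normal(size=(state_dim, action_dim))
states = rng.normal(size=(n_samples, state_dim))
actions = states @ true_policy + 0.01 * rng.normal(size=(n_samples, action_dim))

# "Reverse engineering" the policy: fit a mapping from states to the
# human's actions, here by ordinary least squares.
learned_policy, *_ = np.linalg.lstsq(states, actions, rcond=None)

# The resulting autonomous controller maps a new state directly to an action,
# with no human in the loop.
def controller(state):
    return state @ learned_policy
```

In practice the learned controller is only as good as the demonstrations: states the human never visited are extrapolated, which is one reason the human's control proficiency matters before data collection begins.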