It has been shown that people can learn to perform a variety of motor tasks in novel dynamic environments without visual feedback, highlighting the importance of proprioceptive feedback in motor learning. However, our results show that it is possible to learn a viscous curl force field without proprioceptive error to drive adaptation, by providing visual information about the position error instead. Subjects performed reaching movements in a constraining channel created by a robotic interface. The force that subjects applied against the haptic channel was used to predict the unconstrained hand trajectory under a viscous curl force field. This predicted trajectory was provided as visual feedback to the subjects during movement (virtual dynamics). Subjects were able to use this visual information, although it was discrepant with proprioception, and gradually learned to compensate for the virtual dynamics. Unconstrained catch trials, performed without the haptic channel after learning the virtual dynamics, exhibited trajectories similar to those of subjects who learned to move in the force field in the unconstrained environment. Our results demonstrate that the internal model of the external dynamics that was formed through learning without proprioceptive error was accurate enough to allow compensation for the force field in the unconstrained environment. These findings suggest a method to overcome limitations in learning that result from the mechanical constraints of robotic trainers by providing suitable visual feedback, potentially enabling efficient physical training and rehabilitation using simple robotic devices with few degrees of freedom.
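The virtual dynamics described above can be illustrated with a minimal sketch: a viscous curl field applies a force proportional to hand speed but rotated 90° from the velocity direction, and the subject's measured channel force can be fed into a point-mass integration to produce the predicted (visually displayed) trajectory. The gain `b`, mass `m`, sign convention, and function names below are illustrative assumptions, not values or an implementation from the study.

```python
import numpy as np

# Illustrative parameters (assumed; the study's actual values are not given here)
b = 15.0    # curl-field viscosity gain, N*s/m
m = 1.0     # effective point-mass of the hand, kg
dt = 0.001  # integration time step, s

def curl_force(v):
    """Viscous curl field: force proportional to speed, rotated 90 degrees.
    F = b * [[0, 1], [-1, 0]] @ v  (one common sign convention)."""
    return b * np.array([v[1], -v[0]])

def simulate_reach(f_applied, x0=np.zeros(2), v0=np.zeros(2)):
    """Integrate a point-mass hand driven by the subject's measured force
    plus the virtual curl field, yielding the predicted visual trajectory."""
    x, v = x0.copy(), v0.copy()
    path = [x.copy()]
    for f in f_applied:                 # f: 2-D applied force at each step
        a = (f + curl_force(v)) / m     # net acceleration
        v = v + a * dt                  # Euler integration of velocity
        x = x + v * dt                  # and of position
        path.append(x.copy())
    return np.array(path)
```

Under this sign convention, a purely forward push is deflected laterally by the curl force, which is the position error the visual feedback would display to the subject.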