We describe a robotic system, consisting of an arm and an active vision system, that learns to align its sensory and motor maps so that it can successfully move the tip of its arm to touch the point where it is looking. The system uses an unsupervised Hebbian learning algorithm and learns the alignment by watching its arm wave in front of its eyes. After watching for 25 minutes, the maps are sufficiently well aligned that the system can execute the desired behavior.
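As a rough illustration of the kind of alignment the abstract describes, the following is a minimal sketch, not the paper's implementation. It assumes the visual and motor maps can be modeled as 1-D arrays of units; the names (N_VISUAL, N_MOTOR, ETA, one_hot, hidden_alignment) and all dimensions are illustrative. One visual unit and one motor unit fire together whenever the eyes fixate the moving hand, and a plain Hebbian outer-product update strengthens the connection between the co-active units.

import numpy as np

rng = np.random.default_rng(0)

N_VISUAL = 20   # assumed number of gaze-direction units
N_MOTOR = 20    # assumed number of arm-posture units
ETA = 0.1       # assumed learning rate

# Weight matrix linking visual units to motor units, initially unaligned.
W = np.zeros((N_VISUAL, N_MOTOR))

def one_hot(index, size):
    """Activity vector with a single active unit at `index`."""
    v = np.zeros(size)
    v[index] = 1.0
    return v

# "Watching the arm wave": each step, a random arm posture puts the hand
# somewhere in view and the eyes fixate it, so one visual unit and one
# motor unit are co-active. We assume a fixed correspondence between
# posture and gaze direction that the learner never reads directly.
hidden_alignment = rng.permutation(N_MOTOR)  # ground truth, hidden from learner

for _ in range(5000):
    motor_idx = rng.integers(N_MOTOR)          # arm waves to a random posture
    visual_idx = hidden_alignment[motor_idx]   # eyes fixate the hand
    pre = one_hot(visual_idx, N_VISUAL)
    post = one_hot(motor_idx, N_MOTOR)
    W += ETA * np.outer(pre, post)             # Hebb: strengthen co-active pairs

# Reaching: to touch the point being looked at, read out the motor unit
# most strongly associated with the current gaze direction.
gaze = rng.integers(N_VISUAL)
reach = int(np.argmax(W[gaze]))
print("gaze unit:", gaze, "-> reach posture:", reach,
      "correct:", hidden_alignment[reach] == gaze)

Because each posture here maps to exactly one gaze direction, the argmax readout recovers the correct posture once every pairing has been observed a few times; the real system learns a continuous alignment between maps rather than this discrete toy correspondence.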
Date of Conference: 26-28 Nov. 2009