This article derives and simulates a neural-like network architecture that adaptively controls a visually guided, two-jointed robot arm to reach spatial targets in three dimensions. The architecture learns and maintains its visual-motor calibrations by itself, starting from only loosely defined relationships. It is built from distributed, interleaved combinations of actuator inputs, is fault tolerant, and uses analog processing. Learning occurs by modifying the distributions of input weights in the architecture after each arm positioning: the weights are adjusted incrementally according to the consistency errors between the actuator signals used to orient the cameras and those used to move the arm. Computer simulations show that, after learning, errors in the intended arm actuator signals average 4.3% of the signal range across all possible targets.
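The incremental weight-correction scheme described above can be sketched as a simple self-supervised update rule. The sketch below is an illustration only, not the paper's architecture: the dimensions, learning rate, and the linear map standing in for the visuomotor relationship are all assumptions; the key idea it demonstrates is adjusting weights after each positioning in proportion to the consistency error between gaze-derived and arm-derived actuator signals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: camera-orienting (gaze) signals -> arm actuator signals.
N_GAZE, N_ARM = 6, 4
W = rng.normal(scale=0.01, size=(N_ARM, N_GAZE))  # loosely defined starting map

# A fixed visuomotor relationship, unknown to the learner, used here only to
# generate the "consistent" arm signals for each gaze posture.
W_true = rng.normal(size=(N_ARM, N_GAZE))

LEARNING_RATE = 0.05  # assumed value
for _ in range(5000):
    gaze = rng.uniform(-1.0, 1.0, size=N_GAZE)  # gaze signals orienting on a target
    arm_intended = W @ gaze                      # arm signals from the current map
    arm_consistent = W_true @ gaze               # signals consistent with the gaze posture
    error = arm_consistent - arm_intended        # consistency error after the positioning
    W += LEARNING_RATE * np.outer(error, gaze)   # incremental weight correction

# After learning, residual error is a small fraction of the signal range.
test_gaze = rng.uniform(-1.0, 1.0, size=(100, N_GAZE))
residual = np.abs(test_gaze @ W.T - test_gaze @ W_true.T).mean()
print(f"mean residual error: {residual:.2e}")
```

Because each update is driven only by the mismatch between the two signal streams, no external teacher is required; the calibration is maintained continuously as long as positionings keep occurring.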