Adaptive visual-motor coordination in multijoint robots using parallel architecture

Author: M. Kuperstein (Wellesley College)

This article derives and simulates a neural-like network architecture that adaptively controls a visually guided, two-jointed robot arm to reach spot targets in three dimensions. The architecture learns and maintains visual-motor calibrations by itself, starting with only loosely defined relationships. The geometry of the architecture is composed of distributed, interleaved combinations of actuator inputs. It is fault tolerant and uses analog processing. Learning is achieved by modifying the distributions of input weights in the architecture after each arm positioning. The weights are modified incrementally according to errors of consistency between the actuator signals used to orient the cameras and those used to move the arm. Computer simulations show that, after learning, errors in the intended arm actuator signals average 4.3% of the signal range across all possible targets.
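The learning scheme described in the abstract can be illustrated with a minimal sketch. This is not Kuperstein's actual architecture (which uses distributed, interleaved weight maps); it is a simplified stand-in, assuming a linear map from camera-orientation signals to arm actuator signals, updated incrementally with a delta rule after each arm positioning. All dimensions, the learning rate, and the random "true" calibration are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_GAZE, N_ARM = 6, 4   # illustrative signal dimensions (assumptions)
true_map = rng.normal(size=(N_ARM, N_GAZE))  # unknown visual-motor calibration
W = np.zeros((N_ARM, N_GAZE))                # weights start uncalibrated
lr = 0.05                                    # learning rate (assumption)

def mean_error(W, trials=200):
    """Average discrepancy between intended and correct arm signals."""
    g = rng.normal(size=(trials, N_GAZE))
    return np.abs(g @ true_map.T - g @ W.T).mean()

before = mean_error(W)
for _ in range(2000):
    gaze = rng.normal(size=N_GAZE)   # camera-orientation signals for a target
    arm_correct = true_map @ gaze    # arm signals that actually reach it
    arm_intended = W @ gaze          # signals the network would currently issue
    # Incremental weight correction driven by the consistency error
    # between camera-derived and arm-derived signals:
    W += lr * np.outer(arm_correct - arm_intended, gaze)
after = mean_error(W)
print(f"mean error before: {before:.3f}, after: {after:.5f}")
```

The key property the sketch shares with the paper's scheme is that no calibration is given in advance: the map is recovered purely from per-trial error signals, so it would also track slow drifts in the "true" calibration.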

Published in:

Proceedings of the 1987 IEEE International Conference on Robotics and Automation (Volume 4)

Date of Conference:

Mar 1987