Real-time application of neural networks for sensor-based control of robots with vision

Author: W. T. Miller, Dept. of Electr. & Comput. Eng., Univ. of New Hampshire, Durham, NH, USA

A practical neural-network-based learning control system is described that is applicable to complex robotic systems involving multiple feedback sensors and multiple command variables. In the controller, one network learns to reproduce the nonlinear relationship between the sensor outputs and the system command variables over particular regions of the system state space. The learned information is used to predict the command signals required to produce desired changes in the sensor outputs. A second network learns to reproduce the nonlinear relationship between the system command variables and the changes in the video sensor outputs. The learned information from this network is then used to predict the next set of video parameters, effectively compensating for the image-processing delays. The results of learning experiments using a General Electric P-5 manipulator are presented. These experiments involved control of the position and orientation of an object in the field of view of a video camera mounted on the end of the robot arm, using moving objects with arbitrary orientation relative to the robot. No a priori knowledge of the robot kinematics or of the object speed or orientation relative to the robot was assumed. Image parameter uncertainty and control-system tracking error in the video image were found to converge to low values within a few trials.
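The abstract's two-network scheme (an inverse model mapping desired sensor changes to commands, and a forward model mapping commands to predicted sensor changes) can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's method: the abstract does not specify the network architecture, so a simple online linear regressor trained with the LMS rule stands in for each network, and a hypothetical linear plant stands in for the robot-plus-camera.

```python
import numpy as np

rng = np.random.default_rng(0)

class OnlineLinearNet:
    """Online linear regressor trained by the LMS rule -- a tiny
    stand-in for the paper's learned networks (the abstract does
    not state the actual network type)."""
    def __init__(self, n_in, n_out, lr=0.1):
        self.W = np.zeros((n_out, n_in))
        self.lr = lr

    def predict(self, x):
        return self.W @ x

    def update(self, x, target):
        err = target - self.predict(x)
        self.W += self.lr * np.outer(err, x)

# Hypothetical plant: the true sensor change is an unknown linear
# function of the command (stands in for robot arm + video sensing).
A_true = np.array([[1.0, 0.4], [-0.3, 0.8]])

inverse_net = OnlineLinearNet(2, 2)  # desired sensor change -> command
forward_net = OnlineLinearNet(2, 2)  # command -> predicted sensor change

errors = []
for trial in range(1000):
    desired = rng.normal(size=2)             # desired change in sensor outputs
    explore = 1.0 if trial < 200 else 0.05   # exploration noise, decayed over trials
    u = inverse_net.predict(desired) + explore * rng.normal(size=2)
    dy = A_true @ u                          # actual sensor change produced by the plant
    inverse_net.update(dy, u)                # learn: observed change -> command that caused it
    forward_net.update(u, dy)                # learn: command -> resulting change
    errors.append(float(np.linalg.norm(desired - dy)))

print(f"tracking error: first trial {errors[0]:.2f}, last trial {errors[-1]:.2f}")
```

As in the experiments described above, the tracking error falls to a low value after repeated trials without any prior model of the plant; the forward model plays the role of the delay-compensating predictor, here simply learning the plant map from command/outcome pairs.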

Published in: IEEE Transactions on Systems, Man, and Cybernetics (Volume: 19, Issue: 4)