Combining expert neural networks using reinforcement feedback for learning primitive grasping behavior

Author: M. A. Moussa, School of Engineering, University of Guelph, Ont., Canada

This paper presents an architecture for combining a mixture of experts. The architecture has two unique features: 1) it assumes no prior knowledge of the size or structure of the mixture and allows the number of experts to dynamically expand during training, and 2) reinforcement feedback is used to guide the combining/expansion operation. The architecture is particularly suitable for applications where there is a need to approximate a many-to-many mapping. An example of such a problem is the task of training a robot to grasp arbitrarily shaped objects. This task requires the approximation of a many-to-many mapping, since various configurations can be used to grasp an object, and several objects can share the same grasping configuration. Experiments in a simulated environment using a 28-object database showed how the algorithm dynamically combined and expanded a mixture of neural networks to achieve the learning task. The paper also presents a comparison with two other nonlearning approaches.
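The abstract's two key ideas, a mixture whose expert count grows during training and a reinforcement signal (grasp success or failure) that drives both expert selection and expansion, can be illustrated with a minimal sketch. This is not the paper's actual algorithm; the `Expert`, `DynamicMixture` classes, the running-score update, and the expansion threshold are all illustrative assumptions, with simple linear experts standing in for the neural networks used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class Expert:
    """A single linear 'expert' mapping object features to a grasp configuration.
    (Stand-in for the neural networks in the paper.)"""
    def __init__(self, in_dim, out_dim):
        self.W = rng.normal(scale=0.1, size=(out_dim, in_dim))

    def predict(self, x):
        return self.W @ x

    def update(self, x, error, lr=0.01):
        # Simple gradient step pulling this expert's output toward the target.
        self.W -= lr * np.outer(error, x)

class DynamicMixture:
    """Mixture that starts with one expert and spawns a new one when the
    reinforcement feedback (grasp success/failure) stays poor for all experts.
    The threshold and score decay are illustrative choices, not the paper's."""
    def __init__(self, in_dim, out_dim, expand_threshold=0.2):
        self.in_dim, self.out_dim = in_dim, out_dim
        self.experts = [Expert(in_dim, out_dim)]
        self.scores = [0.5]          # running success estimate per expert
        self.expand_threshold = expand_threshold

    def act(self, x):
        # Select the expert with the best running success score.
        i = int(np.argmax(self.scores))
        return i, self.experts[i].predict(x)

    def feedback(self, i, success, x=None, target=None):
        # Reinforcement signal: exponentially averaged success rate.
        self.scores[i] = 0.9 * self.scores[i] + 0.1 * float(success)
        if success and x is not None and target is not None:
            error = self.experts[i].predict(x) - target
            self.experts[i].update(x, error)
        # Expansion: when every expert is performing poorly, add a fresh one.
        if max(self.scores) < self.expand_threshold:
            self.experts.append(Expert(self.in_dim, self.out_dim))
            self.scores.append(0.5)
```

Repeated failures drive every expert's score below the threshold, so the mixture grows; a fresh expert can then specialize on the object shapes the existing ones handle badly, which is how an expanding mixture can cover a many-to-many shape-to-grasp mapping.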

Published in: IEEE Transactions on Neural Networks (Volume: 15, Issue: 3)