Learning in parallel distributed processing networks: Computational complexity and information content

Authors: J. F. Kolen and A. K. Goel (Dept. of Comput. & Inf. Sci., Ohio State Univ., Columbus, OH, USA)

A set of experiments that precisely identifies the power and limitations of the back-propagation method is reported. The experiment on learning to compute the exclusive-OR function suggests that the computational efficiency of learning by back-propagation depends on the initial weights in the network. The experiment on learning to play tic-tac-toe suggests that the information content of what is learned by the back-propagation method depends on the initial abstractions in the network, and that these abstractions are a major source of power for learning in parallel distributed processing networks. In addition, it is shown that the learning task addressed by connectionist methods, including the back-propagation method, is computationally intractable. Together, these experimental and theoretical results strongly indicate that current connectionist methods may be too limited for the complex learning tasks they seek to solve. It is proposed that the power of neural networks may be enhanced by developing task-specific connectionist methods.
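The paper's XOR finding, that the speed (and success) of back-propagation learning depends on the network's initial weights, is easy to demonstrate. The sketch below is not the authors' code; it is a minimal 2-2-1 sigmoid network trained by plain gradient-descent back-propagation on XOR, run from several random initializations. The architecture, learning rate, weight range, and convergence criterion are all illustrative assumptions.

```python
# Illustrative sketch (not from the paper): train a 2-2-1 sigmoid network
# on XOR by back-propagation from different random initial weights.
# Some seeds converge quickly, others slowly or not at all within the budget,
# echoing the paper's observation about sensitivity to initial weights.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_xor(seed, lr=0.5, epochs=20000):
    """Return the epoch at which all four XOR outputs are within 0.1 of
    their targets, or None if that never happens within `epochs`."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    # Initial weights drawn uniformly from [-1, 1] (an assumed range).
    W1 = rng.uniform(-1, 1, (2, 2)); b1 = rng.uniform(-1, 1, 2)
    W2 = rng.uniform(-1, 1, (2, 1)); b2 = rng.uniform(-1, 1, 1)
    for epoch in range(epochs):
        h = sigmoid(X @ W1 + b1)            # forward pass: hidden layer
        out = sigmoid(h @ W2 + b2)          # forward pass: output
        err = out - y
        if np.max(np.abs(err)) < 0.1:       # every output close to its target
            return epoch
        d_out = err * out * (1 - out)       # backward pass: output deltas
        d_h = (d_out @ W2.T) * h * (1 - h)  # backward pass: hidden deltas
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h);  b1 -= lr * d_h.sum(axis=0)
    return None

if __name__ == "__main__":
    for seed in (0, 1, 2, 3):
        print(f"seed {seed}: converged at epoch {train_xor(seed)}")
```

Running this for a handful of seeds typically shows a wide spread in the number of epochs needed, with some initializations stuck in a local minimum, which is the kind of initial-weight dependence the abstract describes.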

Published in: IEEE Transactions on Systems, Man and Cybernetics (Volume: 21, Issue: 2)