Exploiting multiple degrees of BP parallelism on the highly parallel computer AP1000

5 Author(s)

During the last few years several neurocomputers have been developed, but general-purpose parallel computers remain an alternative to these special-purpose machines. This paper describes a mapping of the backpropagation (BP) learning algorithm onto a large 2D torus architecture. The parallel algorithm was implemented on a 512-processor AP1000 and evaluated using NETtalk and other applications. To obtain high speedup, we propose an approach that combines the multiple degrees of parallelism in the algorithm (training-set parallelism, node parallelism, and pipelining of the training patterns). Running the NETtalk network on 512 processors, we obtained a performance of 81 million weight updates per second. Our results show that to obtain the best performance on a large number of processors, a combination of multiple degrees of parallelism in the backpropagation algorithm ought to be considered.
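
To make the notion of training-set parallelism concrete, the following sketch shows the general idea in Python with NumPy: each "processor" holds a shard of the training patterns, runs a standard BP pass to compute local gradients, and the gradients are summed (the reduction step) before one global weight update. This is an illustrative sketch under stated assumptions, not the authors' AP1000 implementation; the network shape, function names, and the sequential loop standing in for parallel processors are all invented for the example.

```python
# Illustrative sketch of training-set parallelism for backpropagation.
# The per-processor work is simulated with a Python loop; on real parallel
# hardware each shard would be processed concurrently and the gradient sums
# would be an all-reduce across processors.
import numpy as np

def forward_backward(W1, W2, X, T):
    """One BP pass on a shard of patterns: return gradients and squared error."""
    H = np.tanh(X @ W1)                 # hidden-layer activations
    Y = H @ W2                          # linear output layer
    E = Y - T                           # output error
    dW2 = H.T @ E
    dH = (E @ W2.T) * (1.0 - H ** 2)    # backprop through tanh
    dW1 = X.T @ dH
    return dW1, dW2, float((E ** 2).sum())

def train_epoch(W1, W2, X, T, n_procs=4, lr=0.01):
    """Training-set parallelism: shard the patterns over n_procs workers,
    accumulate their gradients, then apply a single global weight update."""
    gW1 = np.zeros_like(W1)
    gW2 = np.zeros_like(W2)
    err = 0.0
    for Xs, Ts in zip(np.array_split(X, n_procs), np.array_split(T, n_procs)):
        dW1, dW2, e = forward_backward(W1, W2, Xs, Ts)  # local BP pass
        gW1 += dW1                                      # gradient reduction
        gW2 += dW2
        err += e
    W1 -= lr * gW1
    W2 -= lr * gW2
    return err

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((64, 8))        # toy training patterns
    T = rng.standard_normal((64, 3))        # toy targets
    W1 = 0.1 * rng.standard_normal((8, 16))
    W2 = 0.1 * rng.standard_normal((16, 3))
    for epoch in range(5):
        print(epoch, train_epoch(W1, W2, X, T))
```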

Published in:

Fourth International Conference on Artificial Neural Networks, 1995

Date of Conference:

26-28 Jun 1995