Neural network simulation on shared-memory vector multiprocessors

Authors: Chia-Jiu Wang (Department of Electrical and Computer Engineering, University of Colorado, Colorado Springs, CO); Chwan-Hwa Wu; S. Sivasindaram

We simulate three neural networks on a vector multiprocessor. Training time is reduced significantly, especially when the training data set is large. The three neural networks are: 1) the feedforward network, 2) the recurrent network, and 3) the Hopfield network. The training algorithms are programmed to best utilize 1) the inherent parallelism in neural computing and 2) the vector and concurrent operations available on the parallel machine. To verify the correctness of the parallelized training algorithms, each neural network is trained to perform a specific function: the feedforward network is trained to perform the Fourier transform, the recurrent network is trained to predict the solution of a delay differential equation, and the Hopfield network is trained to solve the traveling salesman problem. The machine we experiment with is the Alliant FX/80.
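The paper's own code is not shown, but the kind of parallelism it exploits can be illustrated with a minimal, hypothetical sketch (not the authors' implementation): in batch training of a feedforward network, each pattern's gradient contribution is computed independently, so patterns can be distributed across processors while the inner products become vector operations.

```python
import math

# Hypothetical sketch of one epoch of batch gradient descent for a
# single-layer sigmoid network. The per-pattern loop is independent
# across patterns (concurrency across processors), and each dot
# product is a candidate vector operation on a machine like the
# Alliant FX/80. Names and structure here are illustrative only.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_epoch(W, patterns, targets, lr=0.5):
    """One synchronous batch update; gradient accumulation over
    patterns is embarrassingly parallel."""
    n_out, n_in = len(W), len(W[0])
    grad = [[0.0] * n_in for _ in range(n_out)]
    for x, t in zip(patterns, targets):      # parallelizable over patterns
        for j in range(n_out):
            y = sigmoid(sum(W[j][i] * x[i] for i in range(n_in)))  # vector dot
            delta = (t[j] - y) * y * (1.0 - y)
            for i in range(n_in):
                grad[j][i] += delta * x[i]
    for j in range(n_out):                   # single synchronized weight update
        for i in range(n_in):
            W[j][i] += lr * grad[j][i]
    return W

# Toy usage: learn logical OR on two inputs plus a bias input.
W = [[0.0, 0.0, 0.0]]
X = [[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]]
T = [[0], [1], [1], [1]]
for _ in range(2000):
    train_epoch(W, X, T)
```

Only the final weight update is a synchronization point; everything inside the pattern loop reads shared weights and writes private accumulators, which is what makes this style of training map well onto a shared-memory vector multiprocessor.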

Published in:

Proceedings of the 1989 ACM/IEEE Conference on Supercomputing (Supercomputing '89)

Date of Conference:

12-17 Nov. 1989