Parallel architectures for artificial neural nets

Authors: S. Y. Kung (Dept. of Electr. Eng., Princeton Univ., NJ, USA); J. N. Hwang

The authors advocate digital VLSI architectures for implementing a wide variety of artificial neural nets (ANNs). A programmable systolic array is proposed which maximizes the strengths of VLSI in terms of intensive and pipelined computing, yet circumvents its limitations on communication. The array is meant to be more general-purpose than most other proposed ANN architectures: it can execute a variety of algorithms in both the search and learning phases of ANNs, e.g., single-layer recurrent nets (such as Hopfield nets) and multilayer feedforward nets (such as perceptron-like nets). Although design considerations for the learning phase are somewhat more involved, the proposed design accommodates several key learning rules, such as the Hebbian, delta, competitive, and back-propagation rules. Compared with analog neural circuits, the proposed systolic architecture offers greater flexibility, higher precision, and full pipelineability.
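The search phase of the nets described in the abstract reduces to repeated weight-activation matrix-vector products, which is precisely the kind of regular multiply-accumulate dataflow a systolic array pipelines well. Below is a minimal sketch, in software, of a ring-systolic matrix-vector product with one processing element per neuron. It illustrates the dataflow idea only; the function name and data layout are hypothetical and this is not the paper's exact array design.

```python
def ring_systolic_matvec(W, x):
    """Simulate a ring of n processing elements (PEs).

    PE i permanently holds row i of the weight matrix W, while the
    activations x conceptually circulate one position per cycle.  After
    n cycles each PE has accumulated y[i] = sum_j W[i][j] * x[j], with
    every PE doing one multiply-accumulate per cycle (full pipelining,
    nearest-neighbor communication only).
    """
    n = len(x)
    acc = [0.0] * n                      # one accumulator per PE
    for t in range(n):                   # n systolic cycles
        for i in range(n):               # all PEs work in parallel
            j = (i + t) % n              # activation visible at PE i in cycle t
            acc[i] += W[i][j] * x[j]     # local multiply-accumulate
    return acc

W = [[1.0, 2.0], [3.0, 4.0]]
x = [1.0, 1.0]
print(ring_systolic_matvec(W, x))        # same result as a plain matvec
```

The skewed index `(i + t) % n` is what replaces global broadcast with nearest-neighbor circulation: each activation visits every PE exactly once, which is the communication-limiting property of VLSI that the systolic arrangement is designed around.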

Published in:

IEEE International Conference on Neural Networks, 1988

Date of Conference:

24-27 July 1988