A parallel bus architecture for artificial neural networks

Authors: C. Cantrell and L. Wurtz, Dept. of Electrical Engineering, University of Alabama, Tuscaloosa, AL, USA

A new bus architecture for stochastic artificial neural networks is presented. A recent VLSI implementation connects many neurons by broadcasting each neuron's address and activation level in turn for all other neurons to process; such a scheme requires N bus steps to fully connect N neurons. The proposed architecture instead uses stochastic activation levels. Because these outputs are simpler, there is room on the global bus for several neurons to fire in parallel. Each neuron processes the outputs of an entire group of neurons at once, reducing both the number of addressing steps on the bus and the width of the neuron addressing field. This neuron grouping is especially well suited to backpropagation networks. A simulator for the architecture was written and tested.
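The bus-step reduction described above can be sketched numerically. This is a minimal illustrative model, not the paper's actual design: it assumes stochastic outputs are single-bit pulses and that one bus word carries a group address plus the parallel outputs of one neuron group. The function names and the bus parameters (`bus_width`, `addr_bits`) are hypothetical.

```python
import math

def broadcast_steps(n_neurons: int) -> int:
    """Baseline scheme: each neuron broadcasts its address and
    multi-bit activation in turn, so connecting N neurons takes
    N bus steps."""
    return n_neurons

def grouped_stochastic_steps(n_neurons: int, bus_width: int, addr_bits: int) -> int:
    """Illustrative grouped scheme: with 1-bit stochastic outputs,
    each bus word holds a group address (addr_bits wide) plus the
    outputs of several neurons firing in parallel, so the number
    of bus steps drops to ceil(N / group_size)."""
    group_size = bus_width - addr_bits  # 1-bit outputs per bus word
    return math.ceil(n_neurons / group_size)

# Example: 256 neurons on a hypothetical 32-bit bus with a 4-bit
# group address field -> 28 neurons fire per step instead of 1.
print(broadcast_steps(256))                     # 256 steps
print(grouped_stochastic_steps(256, 32, 4))     # 10 steps
```

Note that grouping also shrinks the addressing field: the bus only needs to name a group per step, not an individual neuron, which is the second saving the abstract mentions.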

Published in:

Proceedings of IEEE Southeastcon '93

Date of Conference:

4-7 April 1993