
Transputers and neural networks: an analysis of implementation constraints and performance

Author: J. M. J. Murre (Unit of Experimental and Theoretical Psychology, Leiden University, Netherlands)

A performance analysis is presented that focuses on the achievable speedup of a neural network implementation and on the optimal size of the processor network (transputers, or multicomputers that communicate in a comparable manner). For fully connected and randomly connected neural networks, the topology of the processor network can have only a small, constant effect on the iteration time. For randomly connected neural networks, even severely limiting node fan-in reduces the communication overhead only negligibly. The class of modular neural networks is studied as a separate case and is shown to have better implementation characteristics. On the basis of these implementation constraints, it is argued that randomly connected neural networks cannot be realistic models of the brain.
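The paper's analytical model is not reproduced on this page. As a rough, hypothetical illustration of the compute-versus-communication trade-off the abstract describes, the sketch below times one iteration of an n-node, fully connected network split evenly over p processors. The function names and both time constants (t_calc, t_comm) are illustrative assumptions, not the paper's actual parameters.

# Hedged sketch (not the paper's model): each of the p processors
# updates n/p nodes, each node reading all n activations, and the n
# new activations must then be exchanged among all processors.

def iteration_time(p: int, n: int, t_calc: float = 1e-6, t_comm: float = 1e-4) -> float:
    """Estimated wall-clock time for one iteration.

    t_calc: assumed cost of one connection update on one processor
    t_comm: assumed cost of exchanging one activation value
    Both constants are made up for illustration only.
    """
    compute = (n * n / p) * t_calc                # n/p nodes, n inputs each
    communicate = 0.0 if p == 1 else n * t_comm   # no exchange on a single processor
    return compute + communicate

def speedup(p: int, n: int) -> float:
    return iteration_time(1, n) / iteration_time(p, n)

if __name__ == "__main__":
    n = 1000
    for p in (1, 2, 4, 8, 16, 32, 64, 128):
        print(f"p = {p:4d}   speedup = {speedup(p, n):5.2f}")

Because the exchange term does not shrink with p in this toy model, speedup saturates near n * t_calc / t_comm no matter how the processors are wired together, which mirrors the abstract's claim that the processor-network topology contributes at most a small, constant factor to the iteration time.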

Published in:

IEEE Transactions on Neural Networks (Volume: 4, Issue: 2)