Mapping neural nets onto a massively parallel architecture: a defect-tolerance solution

Authors (4): Distante, F. (Dipartimento di Elettronica, Politecnico di Milano, Italy); Sami, M.; Stefanelli, R.; Storti-Gajani, G.

The problem of mapping neural nets onto massively parallel architectures is considered. The solution examined, based upon regular array structures, can support the mapping of any neural graph. In particular, the case of feed-forward multilayered nets is analyzed, and it is proven that in this case the suggested mapping is easily implemented and optimizes a number of relevant figures of merit. The structure of nodes, I/O ports, and switches is examined with reference to the neural-net case. It is shown that the claims of inherent fault tolerance for neural nets do not actually hold for all classes of faults in a digital implementation; moreover, end-of-production defects require restructuring to guarantee nominal initial operation. An efficient and straightforward solution to the defect-tolerance problem is presented, achieving good harvesting characteristics with minimal redundancy.
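The abstract names two ideas: laying out a feed-forward multilayered net on a regular array of processing elements (PEs), and restructuring around end-of-production defects using limited redundancy. The sketch below is only a toy illustration of those two ideas, not the authors' scheme; the layer-per-column layout, the spare-row count, and the policy of assigning neurons to the topmost working PEs are all assumptions made here for illustration.

```python
# Toy sketch (assumed, not the paper's method): map a feed-forward net onto a
# regular 2-D PE array, one layer per column, and tolerate end-of-production
# defects by provisioning spare rows and skipping over defective PEs.
from dataclasses import dataclass
from typing import List, Optional, Set, Tuple


@dataclass
class PE:
    row: int
    col: int
    defective: bool = False          # marked faulty at end-of-production test
    neuron: Optional[str] = None     # label of the neuron mapped onto this PE


def build_array(rows: int, cols: int,
                defects: Set[Tuple[int, int]]) -> List[List[PE]]:
    """Regular rows x cols array; 'defects' lists PEs found faulty at test time."""
    return [[PE(r, c, (r, c) in defects) for c in range(cols)]
            for r in range(rows)]


def map_feedforward(layers: List[int], array: List[List[PE]]) -> bool:
    """
    Map a feed-forward net (layers[i] = neurons in layer i) onto the array,
    one layer per column. Within a column, neurons go to the topmost working
    PEs, so the spare rows absorb defects. Returns False if some column has
    too few working PEs (redundancy exhausted).
    """
    rows = len(array)
    for col, n_neurons in enumerate(layers):
        good = [array[r][col] for r in range(rows) if not array[r][col].defective]
        if len(good) < n_neurons:
            return False
        for i in range(n_neurons):
            good[i].neuron = f"L{col}N{i}"
    return True


if __name__ == "__main__":
    layers = [3, 4, 2]               # a 3-4-2 feed-forward net
    defects = {(1, 1)}               # one defective PE in column 1
    array = build_array(rows=5, cols=len(layers), defects=defects)  # 1+ spare row
    assert map_feedforward(layers, array)
    for row in array:
        print(" ".join(f"{pe.neuron or ('XX' if pe.defective else '--'):>5}"
                       for pe in row))
```

Running the sketch prints the array with the defective PE marked `XX` and unused spares marked `--`, showing that one spare row per column is enough to absorb the single defect in this toy case.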

Published in:

Proceedings of the IEEE (Volume 79, Issue 4)