
Efficient mapping of ANNs on hypercube massively parallel machines

Authors: Q. M. Malluhi (Dept. of Comput. Sci., Jackson State Univ., MS, USA); M. A. Bayoumi; T. R. N. Rao

This paper presents a technique for mapping artificial neural networks (ANNs) onto hypercube massively parallel machines. The paper first synthesizes a parallel structure, the mesh-of-appendixed-trees (MAT), for fast ANN implementation. It then presents a recursive procedure to embed the MAT structure into the hypercube topology. This procedure serves as the basis for an efficient mapping of ANN computations onto hypercube systems. Both the multilayer feedforward with backpropagation (FFBP) and the Hopfield ANN models are considered. Algorithms to implement the recall and training phases of the FFBP model, as well as the recall phase of the Hopfield model, are provided. The major advantage of our technique is high performance. Unlike other techniques presented in the literature, which require O(N) time, where N is the size of the largest layer, our implementation requires only O(log N) time. Moreover, it allows pipelining of more than one input pattern, which further improves performance.
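The O(log N) step count comes from the log-depth reduction patterns that tree and hypercube topologies support. As a rough illustration only (this is not the paper's MAT embedding; the function name, node count, and weight/input values below are invented for the example), the following sketch simulates a recursive-doubling summation across the nodes of a p-node hypercube, in which every node obtains the global sum of per-node partial products in log2(p) exchange steps:

def hypercube_allreduce_sum(values):
    """Sum one value per hypercube node in log2(p) exchange steps.

    values: per-node partial sums; len(values) must be a power of two.
    Returns per-node results (all equal to the global sum).
    """
    p = len(values)
    assert p > 0 and p & (p - 1) == 0, "node count must be a power of two"
    vals = list(values)
    d = p.bit_length() - 1                 # hypercube dimension, log2(p)
    for k in range(d):                     # one exchange per dimension
        mask = 1 << k                      # neighbor differs in bit k
        vals = [vals[node] + vals[node ^ mask] for node in range(p)]
    return vals

# Example: 8 "nodes" each hold one weighted input w_i * x_i of a neuron;
# after log2(8) = 3 steps, every node holds the neuron's net input.
w = [0.5, -1.0, 0.25, 2.0, 1.5, -0.5, 0.75, 1.0]
x = [1.0,  0.5, 2.0,  1.0, 0.2,  1.0, 2.0,  0.5]
partial = [wi * xi for wi, xi in zip(w, x)]
print(hypercube_allreduce_sum(partial)[0], sum(partial))  # same value

Because neighbors in a hypercube differ in exactly one address bit, each of the d = log2(p) exchanges is a nearest-neighbor communication, which is what lets a layer of N neurons be evaluated in O(log N) rather than O(N) time.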

Published in:

IEEE Transactions on Computers (Volume: 44, Issue: 6)