Flexible data parallel training of neural networks using MIMD-Computers

Authors: H. Besch; H.W. Pohl (German Nat. Res. Center for Comput. Sci., Berlin, Germany)

An approach to flexible and efficient data parallel simulation of neural networks on large-scale MIMD machines is presented. We regard the exploitation of the inherent parallelism of neural network models as necessary if larger networks and larger training data sets are to be considered. Nevertheless, it is essential to provide the flexibility to investigate various training algorithms, or to create new ones, without intimate knowledge of the underlying hardware architecture and communication subsystem. We therefore encapsulated the functional units that are essential to parallel execution. Based on these components, even complex training algorithms can be formulated as a sequential program while the details of the parallelization remain transparent. Communication tasks are performed very efficiently using a distributed logarithmic tree. This logical structure additionally allows a direct mapping of the algorithm onto various important parallel architectures. Finally, a theoretical time complexity model is given and its correspondence to empirical data is shown.
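The "distributed logarithmic tree" mentioned in the abstract can be illustrated with a minimal sketch of a pairwise tree reduction, in which partial gradient sums from P workers are combined in log2(P) steps. This is an assumption-laden illustration, not the authors' implementation; the function name and the simulated in-memory "workers" are hypothetical.

```python
# Hypothetical sketch of a logarithmic tree reduction over P workers.
# Each list element stands in for one worker's local partial result
# (e.g. a gradient contribution computed on its slice of the data).

def tree_reduce(values):
    """Combine per-worker values by pairwise summation along a binary tree.

    In each step, worker i absorbs the value of its partner at
    i + stride; after log2(P) steps, worker 0 holds the global sum.
    """
    vals = list(values)
    p = len(vals)
    assert p & (p - 1) == 0, "sketch assumes a power-of-two worker count"
    stride = 1
    while stride < p:
        for i in range(0, p, 2 * stride):
            vals[i] += vals[i + stride]  # partner sends its value to i
        stride *= 2
    return vals[0]
```

A broadcast back down the same tree would distribute the result to all workers in another log2(P) steps, which is what makes the overall communication cost logarithmic rather than linear in the number of processors.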

Published in:

Proceedings of the Euromicro Workshop on Parallel and Distributed Processing, 1995

Date of Conference:

25-27 January 1995