Efficient mapping of randomly sparse neural networks on parallel vector supercomputers

Authors:

Muller, S.M. (Dept. of Comput. Sci., Saarlandes Univ., Saarbrücken, Germany); Gomes, B.

Abstract:

This paper presents efficient mappings of large sparse neural networks onto a distributed-memory MIMD multicomputer with high-performance vector units. We develop parallel vector code for an idealized network and analyze its performance. Our algorithms combine high performance with a reasonable memory requirement. Because scatter/gather operations are expensive, generating high-performance parallel vector code requires careful attention to the details of the network representation. We show that vectorization can nevertheless more than quadruple performance on our modeled supercomputer. Pushing several patterns at a time through the network (batch mode) exposes an extra degree of parallelism, which improves performance by an additional factor of four. Vectorization and batch updating together therefore yield an order-of-magnitude performance improvement.
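At its core, the computation described in the abstract is a sparse matrix-vector product in which each neuron gathers the activations of its randomly chosen inputs through an index list. The C sketch below is a minimal illustration of that point, not the paper's actual code: the CSR-style arrays, the sigmoid activation, and all sizes are assumptions introduced here. It contrasts single-pattern propagation, whose inner loop is an indexed gather, with batch-mode propagation, where each gathered weight is reused across BATCH patterns so the innermost loop becomes dense and unit-stride.

#include <stdio.h>
#include <math.h>

#define N_OUT 4   /* output neurons (illustrative size)   */
#define N_IN  4   /* input neurons (illustrative size)    */
#define NNZ   6   /* nonzero (existing) weights           */
#define BATCH 3   /* patterns per batch-mode pass         */

/* Assumed CSR-style layout of a randomly sparse weight matrix:
   row_ptr[i]..row_ptr[i+1] indexes the weights feeding neuron i. */
static const int    row_ptr[N_OUT + 1] = {0, 2, 3, 5, 6};
static const int    col_idx[NNZ]       = {0, 2, 1, 0, 3, 2};
static const double weight[NNZ]        = {0.5, -1.0, 0.25, 0.75, -0.5, 1.0};

static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

/* Single-pattern propagation: the inner loop is a gather through
   col_idx, the operation whose cost dominates on a vector unit. */
static void propagate(const double in[N_IN], double out[N_OUT])
{
    for (int i = 0; i < N_OUT; i++) {
        double sum = 0.0;
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++)
            sum += weight[k] * in[col_idx[k]];   /* gather */
        out[i] = sigmoid(sum);
    }
}

/* Batch mode: each gathered weight is reused across BATCH patterns,
   so the innermost loop over b is dense, stride-1, and vectorizes
   without any scatter/gather. */
static void propagate_batch(const double in[N_IN][BATCH],
                            double out[N_OUT][BATCH])
{
    for (int i = 0; i < N_OUT; i++) {
        double sum[BATCH] = {0.0};
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++) {
            const double w = weight[k];
            const int    j = col_idx[k];
            for (int b = 0; b < BATCH; b++)
                sum[b] += w * in[j][b];
        }
        for (int b = 0; b < BATCH; b++)
            out[i][b] = sigmoid(sum[b]);
    }
}

int main(void)
{
    double in[N_IN] = {1.0, 0.5, -0.5, 0.25};
    double out[N_OUT];
    propagate(in, out);
    for (int i = 0; i < N_OUT; i++)
        printf("out[%d] = %f\n", i, out[i]);

    /* Batch mode: three patterns propagated in one pass. */
    double bin[N_IN][BATCH] = {{1.0, 0.0, 1.0}, {0.5, 1.0, 0.0},
                               {-0.5, 0.0, 1.0}, {0.25, 1.0, 0.0}};
    double bout[N_OUT][BATCH];
    propagate_batch(bin, bout);
    printf("batch out[0][0] = %f\n", bout[0][0]);
    return 0;
}

The batch variant suggests where the reported factor-of-four gain plausibly comes from: the cost of gathering a weight and its input index is amortized over the whole batch, and the loop over patterns runs at full vector speed.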

Published in:

Proceedings of the Sixth IEEE Symposium on Parallel and Distributed Processing, 1994

Date of Conference:

26-29 Oct 1994