High-performance neural network training on a computational cluster

4 Author(s)
J. Lu, M. Yang, N. Bourbakis, D. Goldman; Wright State Univ., Dayton, OH, USA

In this paper, a high-performance neural network for image segmentation is described that uses a Myrinet-based PC cluster to implement back-propagation training on large data sets. Back-propagation training of the very large neural network described here requires substantial computing resources, which drives up the time required for training. This work shows that exploiting modern microprocessor architectures and distributing the processing over a high-performance cluster can significantly reduce training time, effectively enlarging the envelope within which large neural networks remain computationally feasible. The proposed personal-computer cluster solution balances the distribution of computations while minimizing communication among cluster nodes. Results show that for a PC cluster with n nodes, training a neural network on the cluster achieves a speedup of nearly (n-2) times compared to training on a single node.
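The abstract's strategy of balancing computation while minimizing inter-node communication is the classic synchronous data-parallel pattern: each node runs back-propagation on its own shard of the training set, and the nodes communicate only once per step to average gradients. A minimal sketch of that pattern follows; it is not the authors' code, and a linear least-squares model stands in for their segmentation network (all function names here are illustrative).

```python
# Hedged sketch of synchronous data-parallel training: each "node" computes a
# gradient on its local shard; the single averaged-gradient exchange per step
# is the only communication point, mirroring the abstract's design goal.
import numpy as np

def local_gradient(w, X, y):
    """Least-squares gradient on one shard; stands in for one cluster node's
    back-propagation pass over its local data."""
    return X.T @ (X @ w - y) / len(y)

def parallel_step(w, shards, lr=0.1):
    """One synchronous step: every node computes a local gradient (in parallel
    on a real cluster), then the averaged gradient is applied."""
    grads = [local_gradient(w, X, y) for X, y in shards]
    return w - lr * np.mean(grads, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
X = rng.normal(size=(400, 2))
y = X @ true_w

n_nodes = 4  # shard the training set across n cluster nodes
shards = [(X[i::n_nodes], y[i::n_nodes]) for i in range(n_nodes)]

w = np.zeros(2)
for _ in range(200):
    w = parallel_step(w, shards)
```

Because the shards are disjoint and gradients are averaged, each step computes the same update as full-batch training on one node, but with the per-step compute divided across the nodes; the gap between ideal n-times speedup and the reported (n-2) reflects the remaining communication and coordination overhead.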

Published in:

Proceedings of the Seventh International Conference on High Performance Computing and Grid in Asia Pacific Region, 2004

Date of Conference:

20-22 July 2004