
A parallel implementation of the batch backpropagation training of neural networks

2 Author(s)
Novokhodko, A.; Valentine, S. (Dept. of Electr. & Comput. Eng., Missouri Univ., Rolla, MO, USA)

Neural networks, being naturally parallel, inspire researchers to seek efficient implementations for various parallel architectures. However, the vast fine-grain parallelism of many tightly connected simple nodes poses a problem for traditional parallel computing on a small number of powerful processors. One approach is to parallelize not the neural network itself but the process of its training, which is the most numerically intensive part of neural network computing. During batch training, each input pattern/signal is presented to the neural network, a response is obtained and evaluated, and a direction of change for the network parameters (the cost-function gradient) is calculated using the backpropagation algorithm. This goal is achieved by parallelizing MATLAB's matrix multiplication routine; the Message Passing Interface (MPI) is used for the parallel implementation. The approach thus allows generally non-parallel procedures offered by MATLAB to be parallelized.
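The batch training step described above can be sketched in NumPy (standing in for MATLAB's matrix operations). This is a minimal illustration, not the paper's implementation: the network shape, the tanh/linear architecture, and all names (`batch_gradients`, `W1`, `W2`) are assumptions. The point is that both the forward pass and the backpropagated gradient reduce to dense matrix products, which is exactly the operation the paper distributes across processors with MPI.

```python
import numpy as np

def batch_gradients(X, T, W1, W2):
    """One batch-backpropagation pass for an assumed single-hidden-layer
    network (tanh hidden units, linear outputs, mean-squared-error cost).

    X  : (n_patterns, n_inputs)   input patterns
    T  : (n_patterns, n_outputs)  target responses
    W1 : (n_inputs, n_hidden)     input-to-hidden weights
    W2 : (n_hidden, n_outputs)    hidden-to-output weights
    """
    H = np.tanh(X @ W1)             # hidden activations  (matrix multiply)
    Y = H @ W2                      # network responses   (matrix multiply)
    E = Y - T                       # per-pattern output error
    cost = 0.5 * np.mean(np.sum(E ** 2, axis=1))

    n = X.shape[0]
    dW2 = H.T @ E / n               # gradient w.r.t. output weights
    dH = (E @ W2.T) * (1.0 - H ** 2)  # backpropagate through tanh
    dW1 = X.T @ dH / n              # gradient w.r.t. hidden weights
    return cost, dW1, dW2

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 5))         # 64 patterns, 5 inputs
T = rng.standard_normal((64, 2))         # 2 target outputs
W1 = 0.1 * rng.standard_normal((5, 8))   # 8 hidden units
W2 = 0.1 * rng.standard_normal((8, 2))

cost, dW1, dW2 = batch_gradients(X, T, W1, W2)
```

Because the batch gradient is a sum of per-pattern contributions, splitting `X` and `T` into row blocks, computing each block's gradient on a separate processor, and summing the results (e.g. with an MPI all-reduce) reproduces the full-batch gradient; this additivity is what makes the data-parallel MPI scheme work.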

Published in:

IJCNN '01: Proceedings of the International Joint Conference on Neural Networks, 2001 (Volume 3)

Date of Conference:

2001
