Robust neural network training using partial gradient probing

Authors:

Manic, M. (Dept. of Comput. Sci., Univ. of Idaho, Boise, ID, USA); Wilamowski, B.

Abstract:

The proposed algorithm features fast and robust convergence for one-hidden-layer neural networks. The search for weights is performed only in the input layer, i.e., on a compressed network. Only forward propagation is carried out; the second layer is trained automatically by pseudo-inversion, for all patterns at once. The last-layer training is also modified to handle nonlinear problems (not presented here). Through the iterations, the gradient is randomly probed along each dimension of the weight set. The algorithm further features a series of modifications, such as adaptive network parameters, that resolve problems like total-error fluctuations and slow convergence. For testing, one of the most popular benchmarks, the parity problem, was chosen. The final version of the proposed algorithm typically solves the various tested parity problems in fewer than ten iterations, regardless of the initial weight set. The performance of the algorithm on parity problems is tested and illustrated by figures.
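The paper supplies no code; what follows is a minimal Python/NumPy sketch of the idea as described in the abstract. The tanh hidden layer, the sum-of-squared-error criterion, the parity-3 instance, and the names (total_error, step, n_hid) are illustrative assumptions, not details from the paper; the simple step-halving rule stands in for the paper's adaptive network parameters, and the nonlinear modification of last-layer training is omitted, as in the abstract.

```python
import numpy as np

# Parity-3 benchmark: inputs in {-1, +1}^3, target is the product of the
# inputs (+1 for an even number of -1 entries, -1 otherwise).
X = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)],
             dtype=float)
y = np.prod(X, axis=1)

n_in, n_hid = X.shape[1], 3                      # assumed network size
rng = np.random.default_rng(0)
W = rng.uniform(-1, 1, size=(n_in + 1, n_hid))   # hidden weights incl. bias: the searched set
Xb = np.hstack([X, np.ones((len(X), 1))])        # inputs with bias column

def total_error(W):
    """Forward pass, then fit the output layer by pseudo-inversion
    for all patterns at once; return the total squared error."""
    H = np.tanh(Xb @ W)                          # hidden-layer activations
    Hb = np.hstack([H, np.ones((len(H), 1))])    # bias for the output layer
    w_out = np.linalg.pinv(Hb) @ y               # linear output weights (nonlinear
                                                 # last-layer variant not shown)
    return np.sum((Hb @ w_out - y) ** 2)

step = 0.5                                       # assumed probe step size
err = total_error(W)
for it in range(100):
    improved = False
    # Probe the gradient along each dimension of the hidden weight set in a
    # random direction; keep the move only if the total error drops.
    for idx in np.ndindex(W.shape):
        direction = rng.choice([-1.0, 1.0])
        W[idx] += direction * step
        new_err = total_error(W)
        if new_err < err:
            err, improved = new_err, True
        else:
            W[idx] -= direction * step           # revert the failed probe
    if not improved:
        step *= 0.5                              # crude stand-in for the paper's
                                                 # adaptive parameters
    if err < 1e-4:
        print(f"solved parity-3 after {it + 1} sweeps, SSE = {err:.2e}")
        break
```

Each sweep costs one forward pass plus one pseudo-inversion per probe; the pseudo-inverse step is what lets the second layer be fitted for all patterns at once, so only the compressed input-layer weight set is ever searched.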

Published in:

Proceedings of the IEEE International Conference on Industrial Informatics (INDIN 2003)

Date of Conference:

21-24 Aug. 2003