An improved backpropagation neural network learning

Author(s):

Stoyanov, I.P., Institute for Information Technologies, Bulgarian Academy of Sciences, Sofia, Bulgaria

The backpropagation neural network (BPNN) is a well-known and widely used mathematical model for pattern recognition, nonlinear function approximation, time series prediction, and similar tasks. Many applications require large input and hidden layers, and in such cases the learning process takes a long time. Many authors have proposed methods that reduce the learning time by improving convergence. In the present report, a topological method is proposed to cope with this problem. Neurons whose weights tend toward constant values during learning are fixed and are not updated for the remainder of training. Learning stops either when the error reaches an appropriate minimum or when the learning time exceeds a preset limit. Experiments demonstrate that this method decreases the learning time by about 50%.
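
The abstract gives no pseudocode, so the following is only a minimal NumPy sketch of the general idea it describes: a one-hidden-layer backpropagation network in which a hidden neuron is treated as "fixed" once its incoming-weight updates fall below a tolerance, after which it is skipped by the update step. The freezing criterion (freeze_tol), network sizes, learning rate, and stopping thresholds are illustrative assumptions, not the authors' settings.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(X, T, n_hidden=8, lr=1.0, max_epochs=5000,
          err_min=1e-3, freeze_tol=1e-6):
    """Batch backpropagation with freezing of stagnant hidden neurons."""
    n_in, n_out = X.shape[1], T.shape[1]
    W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))   # input -> hidden weights
    W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))  # hidden -> output weights
    frozen = np.zeros(n_hidden, dtype=bool)             # neurons no longer updated

    for epoch in range(max_epochs):
        # Forward pass
        H = sigmoid(X @ W1)
        Y = sigmoid(H @ W2)
        err = 0.5 * np.mean((T - Y) ** 2)
        if err < err_min:        # error has reached an acceptable minimum
            break

        # Backward pass (standard delta rule, batch mode)
        d_out = (Y - T) * Y * (1.0 - Y)
        d_hid = (d_out @ W2.T) * H * (1.0 - H)
        dW2 = lr * (H.T @ d_out) / len(X)
        dW1 = lr * (X.T @ d_hid) / len(X)

        # Fixed neurons receive no further weight updates
        dW1[:, frozen] = 0.0
        W1 -= dW1
        W2 -= dW2

        # Freeze hidden neurons whose incoming weights have (nearly) stopped changing
        frozen |= np.abs(dW1).max(axis=0) < freeze_tol

    return W1, W2, err, epoch

# Toy usage: learn XOR and report how training terminated
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
_, _, err, epochs = train(X, T)
print(f"stopped after {epochs + 1} epochs with mean squared error {err:.4f}")

In this sketch the epoch cap max_epochs stands in for the paper's limit on learning time, and only the incoming weights of a hidden neuron are frozen; whether the authors also fix its outgoing weights, or detect stagnation over a longer window, is not stated in the abstract.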

Published in:

Proceedings of the 13th International Conference on Pattern Recognition, 1996 (Volume 4)

Date of Conference:

25-29 Aug 1996