Training algorithm based on Newton's method with dynamic error control

Authors:

Huang, S.J.; Koh, S.N.; Tang, H.K. (School of Electrical & Electronic Engineering, Nanyang Technological University, Singapore)

Abstract:

The use of Newton's method with dynamic error control as a training algorithm for the backpropagation (BP) neural network is considered. Theoretically, Newton's method can be shown to converge at second order, whereas the most widely used steepest-descent method converges only at first order. This suggests that Newton's method might be a faster training algorithm for the BP network. The updating equations of the two methods are analyzed in detail to extract some important properties relating to the characteristics of the error surface. The common XOR benchmark problem is used to compare the performance of the two methods.
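To make the contrast between the two updating equations concrete, the following is a minimal sketch that trains a small network on the XOR benchmark with both rules: the first-order steepest-descent step w <- w - eta * grad E(w) and the second-order Newton step w <- w - H^{-1} grad E(w). The paper's specific dynamic error control scheme is not reproduced here; the 2-2-1 architecture, the learning rate eta, and the Levenberg-style damping term mu (added to keep the Hessian invertible) are all illustrative assumptions, not the authors' settings.

```python
# Sketch only: steepest descent vs. a damped Newton step on XOR.
# The damping `mu` stands in for the paper's dynamic error control,
# which is not described in the abstract and is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)

# XOR dataset: four input patterns and their targets.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([0., 1., 1., 0.])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unpack(w):
    """Split the flat 9-element parameter vector into a 2-2-1 network."""
    W1 = w[0:4].reshape(2, 2)   # input -> hidden weights
    b1 = w[4:6]                 # hidden biases
    W2 = w[6:8]                 # hidden -> output weights
    b2 = w[8]                   # output bias
    return W1, b1, W2, b2

def loss(w):
    """Sum-of-squares error E(w) over the four XOR patterns."""
    W1, b1, W2, b2 = unpack(w)
    h = sigmoid(X @ W1.T + b1)
    y = sigmoid(h @ W2 + b2)
    return 0.5 * np.sum((y - T) ** 2)

def num_grad(f, w, eps=1e-5):
    """Central-difference gradient (adequate for a 9-parameter toy net)."""
    g = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w); e[i] = eps
        g[i] = (f(w + e) - f(w - e)) / (2 * eps)
    return g

def num_hess(f, w, eps=1e-4):
    """Finite-difference Hessian, built column-by-column from the gradient."""
    n = w.size
    H = np.zeros((n, n))
    for i in range(n):
        e = np.zeros_like(w); e[i] = eps
        H[:, i] = (num_grad(f, w + e) - num_grad(f, w - e)) / (2 * eps)
    return 0.5 * (H + H.T)  # symmetrize against finite-difference noise

w_sd = rng.normal(scale=0.5, size=9)   # steepest-descent parameters
w_nt = w_sd.copy()                     # Newton parameters (same start)

eta, mu = 0.5, 1e-2   # learning rate and damping: illustrative values
for step in range(200):
    # First-order update: w <- w - eta * grad E(w)
    w_sd -= eta * num_grad(loss, w_sd)
    # Second-order update: w <- w - (H + mu*I)^{-1} grad E(w).
    # Note the raw Newton step can increase the error near saddle points
    # of the error surface, which is why some form of error control is
    # needed in practice.
    g = num_grad(loss, w_nt)
    H = num_hess(loss, w_nt)
    w_nt -= np.linalg.solve(H + mu * np.eye(9), g)

print(f"steepest descent error after 200 steps: {loss(w_sd):.6f}")
print(f"Newton's method error after 200 steps:  {loss(w_nt):.6f}")
```

Numerical derivatives keep the sketch short; a real implementation would compute the gradient analytically via backpropagation. The sketch also makes the cost trade-off visible: each Newton step needs the full n-by-n Hessian and an O(n^3) solve, so its faster per-step convergence pays off only when the network is small.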

Published in:

International Joint Conference on Neural Networks (IJCNN), 1992 (Volume 3)

Date of Conference:

7-11 Jun 1992