Comparative analysis of backpropagation and extended Kalman filter in pattern and batch forms for training neural networks

Author: Shuhui Li — Dept. of Electr. Eng. & Comput. Sci., Texas A&M Univ., Kingsville, TX, USA

The extended Kalman filter (EKF) algorithm has been used for training neural networks. Like the backpropagation (BP) algorithm, the EKF algorithm can be run in pattern or batch form, but batch-form EKF differs from the gradient averaging used in standard batch-mode BP. This paper compares BP and EKF in both pattern and batch forms for neural network training. For each comparison between batch-mode EKF and BP, the same batch size is used. An overall RMS error computed over all training examples is adopted for the comparison, and this metric proves especially informative for pattern-mode EKF and BP training. Simulations of network training with different batch sizes show that EKF and BP in batch form are usually more stable and achieve a smaller RMS error than in pattern form. However, an overly large batch size can trap BP in a local minimum and can also reduce the training effectiveness of the EKF algorithm.
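To make the pattern/batch distinction for BP concrete, the sketch below trains a tiny MLP on a toy regression task in both modes and reports the overall RMS error over all training examples, the metric the paper uses for comparison. This is an illustrative sketch only (the network size, dataset, learning rate, and epoch count are assumptions, not the paper's setup), and it covers only BP, not the EKF recursion: pattern mode updates the weights after every example, while batch mode averages the gradient over the whole batch before a single update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset (an assumption, not from the paper): learn y = sin(x) with a 1-5-1 MLP.
X = np.linspace(-np.pi, np.pi, 40).reshape(-1, 1)
Y = np.sin(X)

def init_params():
    return {
        "W1": rng.normal(0.0, 0.5, (1, 5)), "b1": np.zeros(5),
        "W2": rng.normal(0.0, 0.5, (5, 1)), "b2": np.zeros(1),
    }

def forward(p, x):
    h = np.tanh(x @ p["W1"] + p["b1"])        # hidden activations
    return h, h @ p["W2"] + p["b2"]           # hidden, network output

def grads(p, x, y):
    # Gradient of the mean squared error over the examples in x (averaged,
    # so batch mode performs the gradient averaging described in the abstract).
    h, yhat = forward(p, x)
    e = yhat - y
    dh = (e @ p["W2"].T) * (1.0 - h ** 2)     # backprop through tanh
    n = len(x)
    return {"W1": x.T @ dh / n, "b1": dh.mean(axis=0),
            "W2": h.T @ e / n,  "b2": e.mean(axis=0)}

def overall_rms(p):
    # Overall RMS error computed over ALL training examples,
    # matching the comparison metric adopted in the paper.
    _, yhat = forward(p, X)
    return float(np.sqrt(np.mean((yhat - Y) ** 2)))

def train_bp(batch_size, epochs=500, lr=0.1):
    p = init_params()
    for _ in range(epochs):
        for i in range(0, len(X), batch_size):
            g = grads(p, X[i:i + batch_size], Y[i:i + batch_size])
            for k in p:
                p[k] -= lr * g[k]
    return overall_rms(p)

# Pattern mode: batch_size = 1 (weight update after every example).
# Batch mode: one averaged-gradient update per pass over all examples.
print("pattern-mode BP, overall RMS:", train_bp(batch_size=1))
print("batch-mode  BP, overall RMS:", train_bp(batch_size=len(X)))
```

Because both runs are scored with the same overall RMS error on the full training set, the two update schedules can be compared directly, which is the role this metric plays in the paper's BP/EKF comparisons.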

Published in:

Neural Networks, 2001. Proceedings. IJCNN '01. International Joint Conference on (Volume: 1)

Date of Conference: