Abstract:
The extended Kalman filter (EKF) algorithm has been used for training neural networks. Like the backpropagation (BP) algorithm, the EKF algorithm can be run in either pattern or batch form, although batch-form EKF differs from the gradient averaging used in standard batch-mode BP. This paper compares BP and EKF in both pattern and batch forms for neural network training; each comparison between batch-mode EKF and BP uses the same batch size. An overall RMS error computed over all training examples is adopted as the comparison metric, which proves especially informative for pattern-mode EKF and BP training. Simulations of network training with different batch sizes show that EKF and BP in batch form are usually more stable and attain smaller RMS errors than their pattern-form counterparts. However, an overly large batch size can trap BP in a local minimum, and can also degrade the training performance of the EKF algorithm.
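The pattern-vs-batch distinction the abstract draws can be illustrated for plain gradient descent (the BP side of the comparison). The sketch below is not the paper's EKF method or its network; it uses a simple linear model, and the function names, learning rate, and batch size are illustrative assumptions. Pattern mode updates the weights after every example, while batch mode averages the gradient over a batch before updating; the overall RMS error over all training examples is the metric the paper adopts.

```python
import numpy as np

def overall_rms(w, X, y):
    # Overall RMS error computed across ALL training examples,
    # the comparison metric described in the abstract.
    e = X @ w - y
    return np.sqrt(np.mean(e ** 2))

def pattern_mode_epoch(w, X, y, lr=0.1):
    # Pattern (incremental) mode: update after each example.
    for x_i, y_i in zip(X, y):
        grad = (x_i @ w - y_i) * x_i
        w = w - lr * grad
    return w

def batch_mode_epoch(w, X, y, lr=0.1, batch_size=None):
    # Batch mode: average the gradient over each batch, then update.
    n = len(y)
    batch_size = batch_size or n
    for start in range(0, n, batch_size):
        Xb = X[start:start + batch_size]
        yb = y[start:start + batch_size]
        grad = Xb.T @ (Xb @ w - yb) / len(yb)
        w = w - lr * grad
    return w

# Toy linear-regression data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

w = np.zeros(3)
for _ in range(200):
    w = batch_mode_epoch(w, X, y, lr=0.1, batch_size=16)
print(overall_rms(w, X, y))  # RMS error after batch-mode training
```

Swapping `batch_mode_epoch` for `pattern_mode_epoch` gives the pattern-form trainer; running both at several batch sizes is one way to reproduce, in miniature, the kind of stability comparison the abstract describes.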
Published in: IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)
Date of Conference: 15-19 July 2001
Date Added to IEEE Xplore: 07 August 2002
Print ISBN: 0-7803-7044-9
Print ISSN: 1098-7576