
Training recurrent networks using the extended Kalman filter


Author:
Ronald J. Williams; College of Computer Science, Northeastern University, Boston, MA, USA

The author describes some relationships between the extended Kalman filter (EKF) as applied to recurrent net learning and some simpler techniques that are more widely used. In particular, making certain simplifications to the EKF gives rise to an algorithm essentially identical to the real-time recurrent learning (RTRL) algorithm. Since the EKF involves adjusting unit activity in the network, it also provides a principled generalization of the teacher forcing technique. Preliminary simulation experiments on simple finite-state Boolean tasks indicated that the EKF can provide a substantial speed-up in the number of time steps required for training on such problems when compared with simpler online gradient algorithms. The computational requirements of the EKF are steep, but they scale with network size at the same rate as those of RTRL.
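To make the EKF-as-trainer idea concrete, the following is a minimal sketch (not the paper's exact algorithm) of the standard EKF weight-update loop: the weights are treated as the state of a nonlinear system observed through the network's output, and each target presentation is a noisy measurement. For simplicity it uses a single sigmoid unit on a toy Boolean task; the variable names, the OR task, and the noise settings (`R`, `Q`) are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in = 2
w = np.zeros(n_in + 1)           # weights + bias: the EKF "state" estimate
P = 100.0 * np.eye(n_in + 1)     # state covariance (large = uncertain prior)
R = 0.1                          # assumed measurement-noise variance
Q = 1e-4 * np.eye(n_in + 1)      # small process noise keeps P from collapsing

# Toy finite Boolean task: learn OR with one sigmoid unit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0.0, 1.0, 1.0, 1.0])

for epoch in range(50):
    for x, t in zip(X, T):
        xb = np.append(x, 1.0)           # input augmented with bias term
        y = sigmoid(w @ xb)              # network output h(w)
        H = (y * (1.0 - y)) * xb         # Jacobian dh/dw (a row vector here)
        S = H @ P @ H + R                # innovation variance (scalar)
        K = (P @ H) / S                  # Kalman gain
        w = w + K * (t - y)              # state (weight) update
        P = P - np.outer(K, H @ P) + Q   # covariance update

preds = sigmoid(X @ w[:-1] + w[-1])
print(np.round(preds, 3))
```

Setting `H` to the full derivative of every unit's activity with respect to every weight is what makes the simplified EKF coincide with RTRL's sensitivity computation; the covariance matrix `P` is the extra bookkeeping that RTRL omits.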

Published in:

IJCNN International Joint Conference on Neural Networks, 1992 (Volume 4)

Date of Conference:

7-11 June 1992