
Implementing online natural gradient learning: problems and solutions


Author: Weishui Wan (CED Syst. Corp., Tokyo, Japan)

Online natural gradient learning is an efficient algorithm that resolves the slow learning speed and poor performance of the standard gradient descent method. However, several problems arise in implementing this algorithm. In this paper, we propose a new algorithm that solves these problems, and we compare it with other known algorithms for online learning, including the Almeida-Langlois-Amaral-Plakhov (ALAP) algorithm, Vario-η, local adaptive learning rates, and learning with momentum, using sample data sets from Proben1 and normalized handwritten digits automatically scanned from envelopes by the U.S. Postal Service. The strengths and weaknesses of these algorithms were analyzed and tested empirically. We found that using the online training error as the criterion for deciding whether the learning rate should be changed is not appropriate, and that our new algorithm outperforms the other existing online algorithms.
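The natural-gradient update preconditions the ordinary gradient with the inverse Fisher information matrix; in the online setting the Fisher matrix itself must be estimated from the incoming samples, which is one source of the implementation problems the paper discusses. Below is a minimal sketch of the general idea only, not the paper's algorithm: it assumes a toy linear-regression stream, a Fisher estimate kept as an exponential moving average of per-sample gradient outer products, and a made-up damping term `lam` for numerical stability. The step sizes `eta` and `eps` are illustrative values, not taken from the paper.

```python
import numpy as np

# Illustrative sketch of damped online natural-gradient descent on a toy
# linear-regression stream. All hyperparameters here are assumptions for
# the example, not values from the paper.
rng = np.random.default_rng(0)

w_true = np.array([2.0, -1.0])   # hypothetical target weights
d = w_true.size

w = np.zeros(d)        # parameters being learned online
F = np.eye(d)          # running Fisher estimate, initialized to identity
eta = 0.05             # parameter learning rate (assumed)
eps = 0.02             # Fisher moving-average rate (assumed)
lam = 0.5              # damping so the preconditioner stays well-conditioned

for t in range(2000):
    x = rng.normal(size=d)                 # one input sample
    y = x @ w_true + 0.1 * rng.normal()    # noisy target
    grad = (x @ w - y) * x                 # gradient of 0.5 * squared error

    # Online Fisher estimate: exponential moving average of
    # gradient outer products.
    F = (1.0 - eps) * F + eps * np.outer(grad, grad)

    # Natural-gradient step: precondition the gradient by (F + lam*I)^-1.
    w -= eta * np.linalg.solve(F + lam * np.eye(d), grad)
```

Without the damping term the preconditioner can explode as the gradients (and hence the Fisher estimate) shrink near convergence, which is exactly the kind of practical difficulty that motivates the adaptive learning-rate schemes the paper compares.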

Published in:

IEEE Transactions on Neural Networks (Volume: 17, Issue: 2)