Online natural gradient learning is an efficient algorithm that addresses the slow convergence and poor performance of the standard gradient descent method. However, several practical problems arise when implementing it. In this paper, we propose a new algorithm that resolves these problems and compare it with other known online learning algorithms, including the Almeida-Langlois-Amaral-Plakhov (ALAP) algorithm, Vario-η, locally adaptive learning rates, and learning with momentum, using sample data sets from Proben1 and normalized handwritten digits automatically scanned from envelopes by the U.S. Postal Service. The strengths and weaknesses of these algorithms are analyzed and tested empirically. We find that using the online training error as the criterion for deciding whether the learning rate should be changed is not appropriate, and that our new algorithm performs better than the existing online algorithms.
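As a point of reference for one of the baselines mentioned above, the following is a minimal, hypothetical sketch of online gradient descent with momentum on a streamed one-dimensional least-squares problem. It is an illustration of the generic momentum update only, not the paper's proposed algorithm, and all names and parameter values are assumptions for the example.

```python
import random

def online_sgd_momentum(samples, eta=0.1, mu=0.9):
    """Online gradient descent with momentum: estimate w minimizing
    (w*x - y)^2 over a stream of (x, y) pairs.

    eta is the learning rate and mu the momentum coefficient; both are
    illustrative defaults, not values from the paper.
    """
    w, v = 0.0, 0.0
    for x, y in samples:
        g = 2.0 * (w * x - y) * x   # gradient of the per-sample squared loss
        v = mu * v - eta * g        # momentum: accumulate past gradients
        w += v                      # parameter update
    return w

# Noiseless synthetic stream with true weight 3.0.
random.seed(0)
true_w = 3.0
stream = [(x, true_w * x) for x in (random.uniform(-1, 1) for _ in range(500))]
w_hat = online_sgd_momentum(stream)
```

On this noiseless stream the estimate converges close to the true weight; the momentum term smooths the per-sample gradient noise that makes plain online gradient descent slow, which is the kind of behavior the comparison in the paper evaluates.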
Date of Publication: March 2006