Convergence of Gradient Descent Algorithm for Diagonal Recurrent Neural Networks

5 Author(s)
Dongpo Xu (Dept. of Appl. Math., Dalian Univ. of Technol., Dalian); Zhengxue Li; Wei Wu; Xiaoshuai Ding; et al.

Recurrent neural networks have been used for the analysis and prediction of time series. This paper is concerned with the convergence of the gradient descent algorithm for training diagonal recurrent neural networks. The existing convergence results consider the online gradient training algorithm under the assumption that a very large number of training samples (infinitely many, in theory) of the time series are available; accordingly, stochastic process theory is used to establish convergence results of a probabilistic nature. In this paper, we consider the case in which only a small number of training samples of the time series are available, so that a stochastic treatment of the problem is no longer appropriate. Instead, we use the offline gradient descent algorithm for training the diagonal recurrent neural network, and we accordingly prove convergence results of a deterministic nature. Monotone decrease of the error function during the iteration is also guaranteed.
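To make the setting concrete, the following is a minimal sketch, not the authors' implementation: it assumes the standard diagonal recurrent network architecture (a diagonal hidden-to-hidden weight matrix, so each hidden unit feeds back only onto itself) and uses JAX automatic differentiation in place of the paper's hand-derived gradient formulas. All function names, shapes, and hyperparameters here are illustrative.

```python
# Sketch: offline (batch) gradient descent for a diagonal recurrent neural
# network (DRNN). Hypothetical code, not from the paper.
import jax
import jax.numpy as jnp

def drnn_predict(params, xs):
    """Run the DRNN over an input sequence xs of shape (T, n_in)."""
    w_in, w_diag, w_out = params  # input, diagonal-recurrent, output weights
    def step(h, x):
        # Diagonal recurrence: elementwise product, each unit feeds back on itself.
        h = jnp.tanh(w_in @ x + w_diag * h)
        return h, w_out @ h
    h0 = jnp.zeros(w_diag.shape)
    _, ys = jax.lax.scan(step, h0, xs)
    return ys  # shape (T, n_out)

def batch_error(params, xs, targets):
    """Squared error over the whole training sequence -- the error function
    whose monotone decrease the paper establishes under step-size conditions."""
    ys = drnn_predict(params, xs)
    return 0.5 * jnp.sum((ys - targets) ** 2)

def train(params, xs, targets, lr=1e-2, iters=500):
    """Offline gradient descent: one update per full pass over the data."""
    grad_fn = jax.grad(batch_error)
    for _ in range(iters):
        grads = grad_fn(params, xs, targets)
        params = tuple(p - lr * g for p, g in zip(params, grads))
    return params

# Illustrative initialization (shapes and scales are arbitrary choices).
key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
n_in, n_hidden, n_out = 1, 8, 1
params = (0.1 * jax.random.normal(k1, (n_hidden, n_in)),
          0.1 * jax.random.normal(k2, (n_hidden,)),
          0.1 * jax.random.normal(k3, (n_out, n_hidden)))
```

The distinction the abstract draws shows up in `train`: each update uses the gradient of the error over the entire training sequence, which is what permits a deterministic, monotone-decrease analysis; an online scheme would instead update after each individual sample and call for probabilistic convergence arguments.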

Published in:

2007 Second International Conference on Bio-Inspired Computing: Theories and Applications (BIC-TA 2007)

Date of Conference:

14-17 Sept. 2007