Accelerating parallel tangent learning for neural networks through dynamic self-adaptation

2 Author(s)
Moallem, P.; Dept. of Electron. Eng., Amirkabir Univ. of Technol., Tehran, Iran; Faez, K.

In gradient-based learning algorithms, momentum usually improves the convergence rate and reduces zigzagging, but it sometimes slows convergence instead. The parallel tangent (partan) gradient is used as a deflecting method to improve convergence. In this paper, we modify the gradient partan algorithm for neural network learning by using two different learning rates: one for the gradient search and the other for acceleration through the parallel tangent step. Moreover, dynamic self-adaptation of the learning rates is used to improve performance. In dynamic self-adaptation, each learning rate is adapted locally to the cost function landscape and to its previous value. Finally, we test the proposed algorithm, called accelerated partan, on problems such as XOR and encoders, and compare the results with those obtained by dynamic self-adaptation of the learning rate and momentum.
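
The abstract describes the method only at a high level. As a rough illustration of the idea (not the authors' exact algorithm), the Python sketch below combines a gradient step and a parallel tangent acceleration step, each with its own step size adapted by a simple multiplicative increase/decrease rule; the function and parameter names (accelerated_partan, eta_g, eta_a, zeta), the adaptation rule, and the toy quadratic cost are assumptions introduced for illustration.

import numpy as np

def accelerated_partan(cost, grad, w0, eta_g=0.05, eta_a=0.05, zeta=1.5, n_iter=200):
    # Illustrative sketch only: a partan-style optimizer with two step sizes,
    # one for the gradient step (eta_g) and one for the acceleration step (eta_a).
    # Each rate is adapted by trying an increased and a decreased value and
    # keeping whichever yields the lower cost (a simple self-adaptation rule,
    # not necessarily the exact rule used in the paper).
    def adapt(rate, step_fn):
        up, down = rate * zeta, rate / zeta
        if cost(step_fn(up)) < cost(step_fn(down)):
            return up, step_fn(up)
        return down, step_fn(down)

    w_prev = np.asarray(w0, dtype=float)
    w = w_prev - eta_g * grad(w_prev)       # initial plain gradient step
    for _ in range(n_iter):
        # Gradient step with its own self-adapted rate.
        g = grad(w)
        eta_g, y = adapt(eta_g, lambda r: w - r * g)
        # Acceleration along the parallel tangent direction (toward the point
        # reached two steps earlier), with its own self-adapted rate.
        d = y - w_prev
        eta_a, w_acc = adapt(eta_a, lambda r: y + r * d)
        if cost(w_acc) < cost(y):
            w_next = w_acc
        else:
            w_next = y                       # reject an acceleration step that hurts
        w_prev, w = w, w_next
    return w

if __name__ == "__main__":
    # Toy ill-conditioned quadratic where plain gradient descent zigzags.
    A = np.diag([1.0, 25.0])
    cost = lambda w: 0.5 * w @ A @ w
    grad = lambda w: A @ w
    w = accelerated_partan(cost, grad, [5.0, 1.0])
    print("final cost:", cost(w))

For the XOR and encoder problems mentioned in the abstract, cost and grad would instead be a network's training error and its backpropagated gradient with respect to the weights.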

Published in:

Proceedings of IEEE TENCON '97: IEEE Region 10 Annual Conference on Speech and Image Technologies for Computing and Telecommunications (Volume 1)

Date of Conference:

4 Dec. 1997