Adding learning to cellular genetic algorithms for training recurrent neural networks

Author(s):

K. W. C. Ku, Man-Wai Mak, and Wan Chi Siu (Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, Kowloon, Hong Kong)

Abstract:

This paper proposes a hybrid optimization algorithm that combines local search (individual learning) with cellular genetic algorithms (GAs) for training recurrent neural networks (RNNs). Each RNN weight is encoded as a floating-point number, and the concatenation of these numbers forms a chromosome. Reproduction takes place locally on a square grid, with each grid point representing a chromosome. Lamarckian and Baldwinian (Baldwin, 1896) mechanisms for combining the cellular GA with learning are compared. Different hill-climbing algorithms are incorporated into the cellular GA: the real-time recurrent learning (RTRL) algorithm, simplified versions of RTRL obtained by freezing some of the weights, and the delta rule. The delta rule, the simplest form of learning, is implemented by treating the RNN as a feedforward network. The hybrid algorithms are used to train RNNs to solve a long-term dependency problem. The results show that Baldwinian learning is inefficient in assisting the cellular GA. It is conjectured that the more difficult it is for genetic operations to produce the genotypic changes that match the phenotypic changes due to learning, the poorer the convergence of Baldwinian learning. Most of the combinations using the Lamarckian mechanism reduce the number of generations needed to reach an optimal network; however, only a few reduce the actual time taken. Embedding the delta rule in the cellular GA is the fastest method. Learning should not be too extensive.
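To make the scheme concrete, here is a minimal, hypothetical Python sketch of a cellular GA with embedded learning: real-valued chromosomes sit on a toroidal square grid, each cell mates with its fittest neighbour, and a few hill-climbing steps are applied to each offspring. The grid size, learning rate, the quadratic stand-in fitness, and the gradient-style local_learning routine are illustrative assumptions, not the paper's RTRL or delta-rule implementations.

```python
import numpy as np

GRID = 5           # grid side length (assumed)
N_WEIGHTS = 20     # chromosome length = number of RNN weights (assumed)
LEARN_STEPS = 3    # kept short: "learning should not be too extensive"

rng = np.random.default_rng(0)

def fitness(w):
    # Placeholder objective; in the paper this would be the RNN's
    # performance on the long-term dependency task.
    return -np.sum(w ** 2)

def local_learning(w, steps=LEARN_STEPS, lr=0.1):
    # Stand-in for RTRL / delta-rule hill climbing: a few steps of
    # gradient ascent on the placeholder fitness above.
    w = w.copy()
    for _ in range(steps):
        w += lr * (-2.0 * w)   # gradient of -sum(w^2)
    return w

def neighbours(i, j):
    # von Neumann neighbourhood on a torus: reproduction is local,
    # with each grid point holding one chromosome.
    return [((i - 1) % GRID, j), ((i + 1) % GRID, j),
            (i, (j - 1) % GRID), (i, (j + 1) % GRID)]

def generation(pop, lamarckian=True):
    new_pop = pop.copy()
    for i in range(GRID):
        for j in range(GRID):
            # Mate with the fittest neighbour.
            mate = max(neighbours(i, j), key=lambda ij: fitness(pop[ij]))
            # Uniform crossover of floating-point genes plus Gaussian mutation.
            mask = rng.random(N_WEIGHTS) < 0.5
            child = np.where(mask, pop[i, j], pop[mate])
            child = child + rng.normal(scale=0.01, size=N_WEIGHTS)
            learned = local_learning(child)
            if lamarckian:
                child = learned              # Lamarckian: write learned weights back
            child_fit = fitness(learned)     # Baldwinian: only fitness after learning counts
            if child_fit > fitness(pop[i, j]):
                new_pop[i, j] = child
    return new_pop

pop = rng.normal(size=(GRID, GRID, N_WEIGHTS))
for _ in range(10):
    pop = generation(pop, lamarckian=True)
print("best fitness:", max(fitness(pop[i, j])
                           for i in range(GRID) for j in range(GRID)))
```

The lamarckian flag switches between the two mechanisms: with it off, the genotype never receives the learned weights, so the genetic operators must produce the corresponding genotypic changes themselves, which is exactly the difficulty the abstract's conjecture about Baldwinian convergence points to.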

Published in:

IEEE Transactions on Neural Networks (Volume: 10, Issue: 2)