Using localizing learning to improve supervised learning algorithms

Author(s): Weaver, S. (Genomatix USA, Cincinnati, OH, USA); Baird, L.; Polycarpou, M.

Abstract:

Slow learning in neural-network function approximators can frequently be attributed to interference, which occurs when learning in one area of the input space causes unlearning in another area. To mitigate the effect of unlearning, this paper develops an algorithm that adjusts the weights of an arbitrary, nonlinearly parameterized network so as to reduce the potential for future interference during learning. This is accomplished by minimizing a biobjective cost function that combines the approximation error with a term that measures interference. An analysis of the algorithm's convergence properties shows that learning with this algorithm reduces future unlearning. The algorithm can be applied during online learning, or it can be used to condition a network so that it is immune to interference during a future learning stage. A simple example demonstrates how interference manifests itself in a network and how less interference can lead to more efficient learning. Simulations demonstrate how the extra cost-function term speeds up training in a variety of situations.
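
Since the abstract describes the cost only structurally (approximation error plus an interference term), a minimal sketch may help fix ideas. The Python/JAX code below is an illustration only: the small tanh network, the gradient-overlap proxy for interference, and the weighting parameter lam are assumptions made for this sketch, not the paper's actual formulation.

import jax
import jax.numpy as jnp

def predict(w, x):
    # Small, nonlinearly parameterized network: one hidden tanh layer.
    h = jnp.tanh(x @ w["W1"] + w["b1"])
    return (h @ w["W2"] + w["b2"])[0]

def approx_error(w, xs, ys):
    # Standard mean-squared approximation error over a batch.
    preds = jax.vmap(lambda x: predict(w, x))(xs)
    return jnp.mean((preds - ys) ** 2)

def flat_grad(w, x):
    # Gradient of the network output at one input, flattened to a vector.
    g = jax.grad(predict)(w, x)
    return jnp.concatenate([v.ravel() for v in jax.tree_util.tree_leaves(g)])

def interference(w, xs):
    # Assumed proxy for interference: overlap between output gradients at
    # distinct inputs. A large overlap means a weight update that fits one
    # point also moves the output at another, i.e. potential unlearning.
    g = jax.vmap(lambda x: flat_grad(w, x))(xs)
    gram = g @ g.T
    off_diag = gram - jnp.diag(jnp.diag(gram))
    return jnp.mean(off_diag ** 2)

def biobjective_cost(w, xs, ys, lam=0.1):
    # Combined cost: fit the data while penalizing gradient overlap.
    return approx_error(w, xs, ys) + lam * interference(w, xs)

# Example: one gradient-descent step on the combined cost.
k1, k2 = jax.random.split(jax.random.PRNGKey(0))
w = {"W1": 0.5 * jax.random.normal(k1, (1, 8)), "b1": jnp.zeros(8),
     "W2": 0.5 * jax.random.normal(k2, (8, 1)), "b2": jnp.zeros(1)}
xs = jnp.linspace(-1.0, 1.0, 16).reshape(-1, 1)
ys = jnp.sin(3.0 * xs[:, 0])
grads = jax.grad(biobjective_cost)(w, xs, ys)
w = jax.tree_util.tree_map(lambda p, g: p - 0.05 * g, w, grads)

In this sketch, larger values of lam push the weights toward representations whose per-sample gradients overlap less, i.e. more localized learning at some cost in immediate fit, which mirrors the trade-off the abstract describes.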

Published in:

IEEE Transactions on Neural Networks (Volume 12, Issue 5)