A constrained-optimization approach to training neural networks for smooth function approximation and system identification

2 Author(s)
G. Di Muro and S. Ferrari, Department of Mechanical Engineering, Duke University, Durham, NC

A constrained-backpropagation training technique is presented to suppress interference and preserve prior knowledge in sigmoidal neural networks while new information is learned incrementally. The technique is based on constrained optimization and minimizes an error function subject to a set of equality constraints derived via an algebraic training approach. As a result, sigmoidal neural networks with long-term procedural memory (also known as implicit knowledge) can be obtained and trained repeatedly online without experiencing interference. The generality and effectiveness of this approach are demonstrated through three applications, namely function approximation, solution of differential equations, and system identification. The results show that the long-term memory is maintained virtually intact, and that it may lead to computational savings because the implicit knowledge provides a lasting performance baseline for the neural network.
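The abstract does not give the paper's exact algebraic-training constraints, so the following Python sketch is only a minimal illustration of the general idea under assumed details: a one-hidden-layer sigmoidal network with a fixed hidden layer, prior knowledge encoded as a memory set (X_mem, Y_mem) that yields linear equality constraints on the output weights, and a null-space gradient projection so that training on new data cannot violate those constraints. All names and data here are hypothetical, and the projection step stands in for whatever constrained-optimization machinery the paper actually uses.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Hypothetical fixed sigmoidal hidden layer; because the hidden weights
    # are held fixed, constraints on the output weights w are linear in w.
    n_hidden = 20
    W1 = rng.normal(size=(n_hidden, 1))
    b1 = rng.normal(size=(n_hidden,))

    def features(X):
        return sigmoid(X @ W1.T + b1)            # (n_samples, n_hidden)

    X_mem = np.linspace(-1.0, 0.0, 5)[:, None]   # prior-knowledge inputs
    Y_mem = np.sin(np.pi * X_mem)                # prior-knowledge targets
    X_new = np.linspace(0.0, 1.0, 50)[:, None]   # new training data
    Y_new = np.sin(np.pi * X_new)

    A, c = features(X_mem), Y_mem.ravel()        # equality constraints A w = c
    H, y = features(X_new), Y_new.ravel()

    # Start from a constraint-satisfying w (minimum-norm solution of A w = c),
    # then take gradient steps on the new-data error projected onto the null
    # space of A, so every iterate still satisfies the memory constraints.
    w = np.linalg.pinv(A) @ c
    P = np.eye(n_hidden) - np.linalg.pinv(A) @ A  # projector onto null(A)

    for _ in range(5000):
        grad = H.T @ (H @ w - y) / len(y)         # gradient of new-data MSE
        w -= 0.1 * (P @ grad)                     # constrained update

    print("max memory-constraint error:", np.max(np.abs(A @ w - c)))  # ~ 0
    print("new-data RMSE:", np.sqrt(np.mean((H @ w - y) ** 2)))

In this toy setup the memory error stays at machine precision throughout training, which mirrors the paper's claim that the long-term memory is maintained virtually intact while new information is learned.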

Published in:

2008 IEEE International Joint Conference on Neural Networks (IJCNN 2008), part of the IEEE World Congress on Computational Intelligence

Date of Conference:

1-8 June 2008