Abstract:
Interference in neural networks occurs when learning in one area of the input space causes unlearning in another area. These interference problems are especially prevalent in online applications, where learning is directed by whatever training data is currently available rather than by an optimal presentation schedule of the training data. We propose a procedure that enhances a learning algorithm by giving it the ability to make the network more local and hence less likely to suffer from future interference. Through simulations using radial basis function (RBF) networks and sigmoidal multi-layer perceptron (MLP) networks, it is shown that by optimizing a new cost function that penalizes non-locality, the approximation error is reduced more quickly than with standard backpropagation.
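The abstract does not specify the form of the new cost function, only that it augments the usual approximation error with a penalty for non-locality. Purely as an illustrative sketch (not the authors' method), the Python snippet below assumes the penalty is the mean squared width of the RBF units, so that units with broad receptive fields, which respond over large regions of input space and are most prone to interference, are discouraged; the names `rbf_forward` and `penalized_cost` and the weighting `lam` are hypothetical.

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """1-D RBF network: sum_j w_j * exp(-(x - c_j)^2 / (2 s_j^2))."""
    act = np.exp(-(x[:, None] - centers[None, :]) ** 2
                 / (2.0 * widths[None, :] ** 2))
    return act @ weights

def penalized_cost(x, y, centers, widths, weights, lam=0.1):
    """Squared approximation error plus an assumed locality penalty.

    The penalty term (mean squared RBF width) is one plausible way to
    quantify non-locality; the paper's actual penalty is not stated
    in the abstract.
    """
    err = rbf_forward(x, centers, widths, weights) - y
    mse = np.mean(err ** 2)
    locality_penalty = np.mean(widths ** 2)  # assumed form
    return mse + lam * locality_penalty

# Example: fit sin(pi x) with 10 units. Gradient descent on this cost
# (rather than on the plain squared error) would also shrink the widths,
# keeping each unit's influence local.
x = np.linspace(-1.0, 1.0, 50)
y = np.sin(np.pi * x)
centers = np.linspace(-1.0, 1.0, 10)
widths = np.full(10, 0.5)
weights = np.zeros(10)
print(penalized_cost(x, y, centers, widths, weights))
```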
Date of Conference: 17 September 1998
Date Added to IEEE Xplore: 06 August 2002
Print ISBN: 0-7803-4423-5
Print ISSN: 2158-9860