In this paper, a squared penalty term is added to the conventional error function to improve the generalization of neural networks. A weight boundedness theorem and two convergence theorems are proved for the gradient learning algorithm with penalty when it is used to train a two-layer feedforward neural network. To illustrate these theoretical findings, numerical experiments are conducted on a linearly separable problem and the simulation results are presented.
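The training scheme described above can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: a two-layer feedforward network trained by plain gradient descent on the mean squared error plus a squared (L2) weight penalty, on a small linearly separable dataset. The network width `H`, learning rate `lr`, penalty coefficient `lam`, and labeling rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linearly separable data (assumption): label 1 if x1 + x2 > 0, else 0.
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two-layer network: 2 inputs -> H sigmoid hidden units -> 1 sigmoid output.
H = 4                       # hidden width (assumption)
W1 = rng.normal(0.0, 0.5, size=(2, H))
W2 = rng.normal(0.0, 0.5, size=(H,))
lr, lam = 0.5, 1e-3         # learning rate and penalty coefficient (assumptions)

def forward(X, W1, W2):
    hidden = sigmoid(X @ W1)
    return hidden, sigmoid(hidden @ W2)

def loss(X, y, W1, W2):
    _, out = forward(X, W1, W2)
    err = 0.5 * np.mean((out - y) ** 2)
    # Squared penalty term added to the conventional error function.
    return err + lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2))

loss0 = loss(X, y, W1, W2)
for _ in range(2000):
    hidden, out = forward(X, W1, W2)
    # Backpropagated gradients of the penalized error function.
    d_out = (out - y) * out * (1 - out) / len(y)
    gW2 = hidden.T @ d_out + 2 * lam * W2
    d_hid = np.outer(d_out, W2) * hidden * (1 - hidden)
    gW1 = X.T @ d_hid + 2 * lam * W1
    W2 -= lr * gW2
    W1 -= lr * gW1
loss1 = loss(X, y, W1, W2)
```

In this sketch the penalty shows up as the extra `2 * lam * W` term in each gradient, which pulls the weights toward zero at every step; this is the mechanism behind the kind of weight-boundedness result the abstract refers to.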