A partial analysis of stochastic convergence in a generalized two-layer perceptron with backpropagation learning

3 Author(s)
Vaughn, J.L. (Dept. of Electr. & Comput. Eng., University of California, Irvine, CA, USA); Bershad, N.J.; Shynk, J.J.

The authors study the stationary points of a two-layer perceptron that attempts to identify the parameters of a specific stochastic nonlinear system. The training sequence is modeled as the output of the nonlinear system, with an input comprising an independent sequence of zero-mean Gaussian vectors with independent components. The training rule is a limiting case of backpropagation (to simplify the analysis). Equations are given which define the stationary points of the algorithm for an arbitrary output nonlinearity g(x). The solutions to these equations for the outer layer show that, for a continuous g(x), there is a unique solution for the outer layer weights for any given set of fixed hidden layer weights. These solutions do not necessarily yield zero error. However, if the hidden layer weights are also trained, the unique solution for zero error requires that the parameters of the two-layer perceptron exactly match those of the nonlinear system.
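
The system-identification setup described above can be illustrated with a short numerical sketch. The snippet below is not the authors' analysis or their limiting-case training rule; it simply simulates the scenario under assumed choices (g(x) = tanh(x), small network sizes, a fixed step size, and a "teacher" system sharing the student's architecture) and trains a two-layer perceptron with ordinary stochastic backpropagation on zero-mean Gaussian inputs.

```python
# Minimal sketch (assumptions noted above, not the paper's exact training rule):
# identify a stochastic nonlinear system with a two-layer perceptron via backprop.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 4, 3            # input dimension and hidden-layer size (assumed)
g = np.tanh                       # output/hidden nonlinearity g(x) (assumed)
g_prime = lambda x: 1.0 - np.tanh(x) ** 2

# "Teacher": the unknown nonlinear system whose parameters the perceptron
# would have to match exactly to reach the zero-error stationary point.
W_sys = rng.standard_normal((n_hidden, n_in))
a_sys = rng.standard_normal(n_hidden)

def system_output(x):
    return g(a_sys @ g(W_sys @ x))

# "Student": two-layer perceptron with hidden weights W and outer weights a.
W = 0.1 * rng.standard_normal((n_hidden, n_in))
a = 0.1 * rng.standard_normal(n_hidden)
mu = 0.05                         # step size (assumed)

for _ in range(200_000):
    # Training input: zero-mean Gaussian vector with independent components.
    x = rng.standard_normal(n_in)
    d = system_output(x)          # desired response from the nonlinear system

    h_lin = W @ x                 # hidden-layer pre-activations
    h = g(h_lin)
    y = g(a @ h)                  # perceptron output

    # Standard instantaneous-gradient backpropagation updates.
    e = d - y
    delta_out = e * g_prime(a @ h)
    delta_hidden = delta_out * a * g_prime(h_lin)
    a += mu * delta_out * h
    W += mu * np.outer(delta_hidden, x)

# Estimate the residual mean-square identification error after training.
test_err = []
for _ in range(1000):
    x = rng.standard_normal(n_in)
    test_err.append((system_output(x) - g(a @ g(W @ x))) ** 2)
print("estimated mean-square error:", np.mean(test_err))
```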

Published in:

Neural Networks for Signal Processing II: Proceedings of the 1992 IEEE-SP Workshop

Date of Conference:

31 Aug-2 Sep 1992