Convergence properties and stationary points of a perceptron learning algorithm

Authors: Shynk, J.J. (Dept. of Electr. & Comput. Eng., University of California, Santa Barbara, CA, USA); Roy, S.

An analysis of the stationary (convergence) points of an adaptive algorithm that adjusts the perceptron weights is presented. This algorithm is identical in form to the least-mean-square (LMS) algorithm, except that a hard limiter is incorporated at the output of the summer. The algorithm is described in detail, a simple two-input example is presented, and some of its convergence properties are illustrated. When the input of the perceptron is a Gaussian random vector, the stationary points of the algorithm are not unique; they depend on the algorithm step size and the momentum constant. The stationary points of the algorithm are presented, and the properties of the adaptive weight vector near convergence are discussed. Computer simulations that verify the analysis are given.
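The abstract describes the update as LMS in form with a hard limiter placed after the summer, and mentions a step size and a momentum constant. A minimal sketch of one such adaptation step is given below; the function name, the use of the sign function as the hard limiter, and the exact placement of the momentum term are illustrative assumptions, not the paper's definitive formulation.

```python
import numpy as np

def perceptron_lms_update(w, x, d, mu, alpha, prev_dw):
    """One adaptation step: an LMS-style update with a hard limiter
    at the summer output, plus a momentum term (a sketch; the exact
    form of the momentum term is an assumption).

    w       : current weight vector
    x       : input vector
    d       : desired output (+1 or -1)
    mu      : algorithm step size
    alpha   : momentum constant
    prev_dw : previous weight update (for the momentum term)
    """
    y = np.sign(w @ x)            # hard limiter at the summer output
    e = d - y                     # error formed after the hard limiter
    dw = mu * e * x + alpha * prev_dw
    return w + dw, dw
```

Iterating this step over a simple two-input, linearly separable data set (as in the paper's two-input example) drives the error to zero once every sample falls on the correct side of the hyperplane defined by `w`.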

Published in:

Proceedings of the IEEE (Volume 78, Issue 10)