
A symmetric linear neural network that learns principal components and their variances

2 Author(s)
F. Peper and H. Noda (Communications Research Laboratory, Japanese Ministry of Posts and Telecommunications, Kobe, Japan)

This paper proposes a linear neural network for principal component analysis whose weight vector lengths converge to the variances of the principal components in the input data. The network breaks the symmetry in its learning process through the differences in weight vector lengths and, unlike other linear neural networks described in the literature, does not need any asymmetries in its structure to extract the principal components. We prove the asymptotic stability of a stationary solution of the network's learning equation. Simulations show that the set of weight vectors converges to this solution. A comparison of convergence speeds shows that, in the simulations, the proposed neural network is about as fast as Sanger's generalized Hebbian algorithm (GHA) network, the weighted subspace rule network of Oja et al., and Xu's LMSER network (weighted linear version).
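
For context, the sketch below implements one of the comparison baselines named in the abstract, Sanger's generalized Hebbian algorithm (GHA). It is not the symmetric network proposed in the paper, whose learning equation is not given in this abstract: GHA breaks symmetry structurally through its deflation term, and its weight vectors converge to unit-length principal eigenvectors rather than to vectors whose lengths equal the component variances. The function name, learning rate, and data are illustrative assumptions.

import numpy as np

def gha_pca(X, n_components, lr=0.01, n_epochs=50, seed=0):
    # Sanger's GHA (a comparison method from the abstract, not the
    # paper's proposed symmetric network). Rows of W are the weight
    # vectors; they converge to orthonormal principal eigenvectors.
    rng = np.random.default_rng(seed)
    n_samples, dim = X.shape
    W = rng.normal(scale=0.1, size=(n_components, dim))
    for _ in range(n_epochs):
        for x in X:
            y = W @ x                     # outputs of the linear units
            # GHA update: dW_i = lr * y_i * (x - sum_{j <= i} y_j W_j)
            L = np.tril(np.outer(y, y))   # lower-triangular y_i * y_j
            W += lr * (np.outer(y, x) - L @ W)
    return W

# Usage sketch: zero-mean synthetic data with decreasing column variances.
X = np.random.default_rng(1).normal(size=(2000, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.2])
X -= X.mean(axis=0)
W = gha_pca(X, n_components=3)
print(W @ W.T)  # approximately the identity: orthonormal weight vectors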

Published in:

IEEE Transactions on Neural Networks (Volume: 7, Issue: 4)