Neural network models performing principal component analysis are considered. We first discuss the convergence of Sanger's heuristically developed two-layered neural network (1989), which is based on the generalized Hebbian algorithm (GHA). We then propose a three-layered hybrid network model in which the generalized Hebbian algorithm serves as the learning rule for the weights between the input and hidden layers, and the anti-Hebbian rule for the weights between the hidden and output layers. We provide conditions under which the proposed network models find the principal components, and show that the hybrid network model converges faster than Sanger's network.
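As a concrete illustration of the generalized Hebbian algorithm discussed above, the following is a minimal NumPy sketch (not the paper's implementation; the data, learning rate, and network size are illustrative assumptions). Each row of the weight matrix W converges toward one principal component of the zero-mean input stream, with Sanger's lower-triangular deflation term decorrelating successive rows:

```python
import numpy as np

def gha_step(W, x, lr):
    """One update of Sanger's generalized Hebbian algorithm (GHA).

    W: (k, n) weight matrix (rows estimate principal components)
    x: (n,) zero-mean input sample
    """
    y = W @ x                               # hidden-layer outputs
    # Sanger's rule: dW = lr * (y x^T - LT[y y^T] W),
    # where LT[.] keeps the lower-triangular part (incl. diagonal).
    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

rng = np.random.default_rng(0)
# Synthetic zero-mean data with anisotropic covariance (illustrative only).
A = np.diag([3.0, 1.0, 0.3]) @ rng.standard_normal((3, 3))
X = rng.standard_normal((20000, 3)) @ A
X -= X.mean(axis=0)

W = 0.1 * rng.standard_normal((2, 3))       # extract the top 2 components
for x in X:
    W = gha_step(W, x, lr=0.002)

# Compare against the leading eigenvectors of the sample covariance.
C = X.T @ X / len(X)
eigvals, eigvecs = np.linalg.eigh(C)
top = eigvecs[:, ::-1][:, :2].T             # rows: top-2 eigenvectors
for i in range(2):
    align = abs(W[i] / np.linalg.norm(W[i]) @ top[i])
    print(f"component {i}: |cos angle| = {align:.3f}")
```

After a single pass over the stream, each row of W should align (up to sign) with the corresponding eigenvector of the sample covariance; the hybrid model in the paper replaces the deflation term's role with an anti-Hebbian output layer.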