We introduce a new growing neural architecture together with a learning paradigm based on radial basis functions (RBFs) and principal component analysis (PCA). In the first layer, linear neurons perform a singular value decomposition (SVD) to decorrelate the input data. For each rotated axis (principal component), the network provides a separate group of 1D Gaussian functions. In a subsequent layer, pi-neurons combine the 1D Gaussians into multidimensional RBFs. The output layer is linear. The learning algorithm follows the ideas introduced by the coarse-coding resource allocating network (Deco and Ebmeyer, 1993). Simulations on handwritten digits demonstrate the performance and the main advantage of this algorithm: an optimal reduction of complexity.
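The architecture described above can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: all names, center placements, and the least-squares output fit are assumptions. It shows the forward path only (SVD decorrelation, per-axis 1D Gaussian groups, pi-neurons forming multidimensional RBFs by multiplication, and a linear output layer), without the growing/resource-allocating learning procedure.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 2))       # toy 2D input data
y = np.sin(np.pi * X[:, 0]) * X[:, 1]       # toy regression target

# Layer 1: linear neurons performing SVD to decorrelate the input (PCA rotation).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt.T                               # coordinates along principal axes

# Separate group of 1D Gaussians per rotated axis (5 fixed centers per axis;
# in the paper these would be allocated by the growing algorithm).
n_centers = 5
centers = [np.linspace(Z[:, d].min(), Z[:, d].max(), n_centers)
           for d in range(Z.shape[1])]
widths = [c[1] - c[0] for c in centers]

def pi_layer(Z):
    """Pi-neurons: multiply one 1D Gaussian per axis into a multidim RBF."""
    feats = []
    for idx in product(range(n_centers), repeat=Z.shape[1]):
        g = np.ones(Z.shape[0])
        for d, i in enumerate(idx):
            g *= np.exp(-(Z[:, d] - centers[d][i]) ** 2 / (2 * widths[d] ** 2))
        feats.append(g)
    return np.column_stack(feats)

H = pi_layer(Z)                             # hidden RBF activations
# Linear output layer, fitted here by ordinary least squares.
w, *_ = np.linalg.lstsq(H, y, rcond=None)
pred = H @ w
print("train RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```

Because each pi-neuron multiplies axis-aligned 1D Gaussians in the rotated coordinate system, the resulting multidimensional RBFs are automatically aligned with the principal components of the data, which is what enables the complexity reduction claimed in the abstract.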