The proposed algorithm features fast and robust convergence for neural networks with one hidden layer. The search for weights is performed only in the input layer, i.e., on a compressed network. Only forward propagation is used: the second layer is trained automatically by pseudo-inversion, for all patterns at once. The last-layer training is also modified to handle nonlinear problems, although that extension is not presented here. Through the iterations, the gradient is randomly probed along each dimension of the weight set. The algorithm further features a series of modifications, such as adaptive network parameters, that resolve problems such as total-error fluctuations and slow convergence. The algorithm was tested on one of the most popular benchmarks, the parity problems. The final version of the proposed algorithm typically finds a solution to the tested parity problems in fewer than ten iterations, regardless of the initial weight set. Its performance on parity problems is illustrated by figures.
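The abstract gives no implementation details, so the following is only a minimal sketch of the described scheme under stated assumptions: tanh hidden units with bipolar (+/-1) pattern coding, a linear output layer trained in closed form by Moore-Penrose pseudo-inversion over all patterns at once, a random +/-step probe along each hidden-weight dimension, and a simple step-halving rule standing in for the paper's adaptive parameters, which are not detailed here. All function names and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def parity_dataset(n):
    """All 2**n input patterns (bipolar +-1 coding) with parity targets."""
    bits = np.array([[(i >> b) & 1 for b in range(n)] for i in range(2 ** n)])
    X = 2.0 * bits - 1.0
    t = np.where(bits.sum(axis=1) % 2 == 1, 1.0, -1.0).reshape(-1, 1)
    return X, t

def hidden_layer(X, W):
    """Forward propagation through the hidden layer; bias column appended."""
    H = np.tanh(X @ W)
    return np.hstack([H, np.ones((len(H), 1))])

def output_by_pseudoinverse(X, t, W):
    """Closed-form output-layer training for all patterns at once.
    Returns the total squared error and the least-squares output weights."""
    H = hidden_layer(X, W)
    V = np.linalg.pinv(H) @ t
    return float(np.sum((H @ V - t) ** 2)), V

def train_parity(n=3, n_hidden=3, max_iter=100, step=0.5, tol=1e-6, seed=0):
    """Search only the input-layer weights W; the output layer is always
    recomputed by pseudo-inversion, so only forward propagation is needed."""
    rng = np.random.default_rng(seed)
    X, t = parity_dataset(n)
    X = np.hstack([X, np.ones((len(X), 1))])        # input bias column
    W = rng.standard_normal((n + 1, n_hidden))      # the only searched weights
    best, V = output_by_pseudoinverse(X, t, W)
    for it in range(max_iter):
        improved = False
        for idx in np.ndindex(*W.shape):            # probe each weight dimension
            d = step * rng.choice([-1.0, 1.0])      # random probe direction
            for trial in (d, -2.0 * d):             # try +d, then -d
                W[idx] += trial
                err, V_new = output_by_pseudoinverse(X, t, W)
                if err < best:                      # keep an improving move
                    best, V, improved = err, V_new, True
                    break
            else:
                W[idx] += d                         # neither helped: revert
        if not improved:
            step *= 0.5    # crude adaptive parameter (assumption, see lead-in)
        if best < tol:
            break
    return W, V, best, it + 1

if __name__ == "__main__":
    W, V, err, iters = train_parity(n=3)
    X, t = parity_dataset(3)
    Xb = np.hstack([X, np.ones((len(X), 1))])
    ok = np.all(np.sign(hidden_layer(Xb, W) @ V) == t)
    print(f"parity-3 solved={ok} in {iters} iterations, squared error {err:.2e}")
```

Because each candidate hidden-weight set is scored by the error left after exact least-squares output training, the search effectively runs on the compressed network of input-layer weights only, which is what keeps the iteration count low in this sketch.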