The recursive least squares (RLS) learning algorithm for multilayer feedforward neural networks uses a sigmoid nonlinearity at node outputs. It is shown that using a piecewise linear function at node outputs instead makes the algorithm faster. The modified algorithm improves computational efficiency and, by preserving matrix symmetry, avoids the explosive divergence normally seen in the conventional RLS algorithm due to finite-precision effects. The piecewise linear function also avoids the approximation that is otherwise necessary in the derivation of the conventional algorithm with the sigmoid nonlinearity. Simulation results on the XOR problem, the 4-2-4 encoder, and a function approximation problem indicate that the modified algorithm reduces the occurrence of local minima and improves convergence speed compared to the conventional RLS algorithm. A nonlinear system identification and control problem is considered to demonstrate the application of the algorithm to complex problems.
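To make the key idea concrete, the sketch below contrasts a sigmoid node output with a piecewise linear one. The particular slope and saturation limits here are illustrative assumptions, not values from the paper; the point is that on the linear segment the function has an exact inverse, so mapping a desired node output back to a desired pre-activation needs no approximation, unlike the sigmoid case in the conventional RLS derivation.

```python
import numpy as np

def sigmoid(x):
    """Conventional sigmoid node output."""
    return 1.0 / (1.0 + np.exp(-x))

def pwl(x, slope=0.5):
    """Piecewise linear node output (assumed form): linear with the
    given slope around zero, saturating at 0 and 1 outside."""
    return np.clip(0.5 + slope * x, 0.0, 1.0)

def pwl_inverse(y, slope=0.5):
    """Exact inverse on the linear segment. For the sigmoid, the
    corresponding inversion step in the RLS derivation requires an
    approximation; here it is exact."""
    return (y - 0.5) / slope

# Inputs restricted to the linear (unsaturated) region:
x = np.linspace(-0.9, 0.9, 7)
assert np.allclose(pwl_inverse(pwl(x)), x)  # inversion is exact
```

Because the derivative on the linear segment is a constant rather than the output-dependent term of the sigmoid, the per-node update arithmetic is also cheaper, which is consistent with the computational-efficiency claim above.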