Modified recursive least squares (RLS) algorithm for neural networks using piecewise linear function

Authors: Gokhale, A.P.; Nawghare, P.M. — Dept. of Electron. & Comput. Sci., Visvesvaraya Nat. Inst. of Technol., Nagpur, India

The recursive least squares (RLS) learning algorithm for multilayer feedforward neural networks conventionally uses a sigmoid nonlinearity at node outputs. It is shown that replacing the sigmoid with a piecewise linear function makes the algorithm faster. The modified algorithm improves computational efficiency, and by preserving matrix symmetry it avoids the explosive divergence normally seen in the conventional RLS algorithm due to finite-precision effects. The piecewise linear function also avoids the approximation that is otherwise necessary in deriving the conventional algorithm with a sigmoid nonlinearity. Simulation results on the XOR problem, the 4-2-4 encoder and a function approximation problem indicate that the modified algorithm reduces the occurrence of local minima and improves convergence speed compared with the conventional RLS algorithm. A nonlinear system identification and control problem demonstrates the application of the algorithm to complex problems.
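To make the two ideas in the abstract concrete — an exact-derivative piecewise linear activation, and a symmetry-preserving covariance update — here is a minimal sketch. It is not the authors' multilayer algorithm: the activation's breakpoints (`lo`, `hi`) are assumed for illustration, and `rls_update` is the generic textbook RLS step for a single linear node, with an added symmetrisation of `P` of the kind the abstract credits with preventing finite-precision divergence.

```python
import numpy as np

def pw_linear(x, lo=-1.0, hi=1.0):
    """Piecewise linear activation: identity on [lo, hi], saturated outside.
    Illustrative stand-in; the paper's exact breakpoints are not given here."""
    return np.clip(x, lo, hi)

def pw_linear_deriv(x, lo=-1.0, hi=1.0):
    """Exact derivative: 1 inside the linear region, 0 outside.
    No linearising approximation is needed, unlike the sigmoid case."""
    return np.where((x > lo) & (x < hi), 1.0, 0.0)

def rls_update(P, w, x, d, lam=0.99):
    """One textbook RLS step (forgetting factor lam) for weights w,
    inverse-correlation matrix P, input x, desired output d.
    Symmetrising P each step guards against round-off-driven divergence."""
    Px = P @ x
    k = Px / (lam + x @ Px)      # gain vector
    e = d - w @ x                # a-priori error
    w = w + k * e
    P = (P - np.outer(k, Px)) / lam
    P = 0.5 * (P + P.T)          # enforce matrix symmetry explicitly
    return P, w
```

Because the derivative of `pw_linear` is exactly 0 or 1, the backpropagated error terms in such a network need no approximation, which is the source of the speed-up claimed in the abstract.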

Published in:

Circuits, Devices and Systems, IEE Proceedings (Volume: 151, Issue: 6)