Performance Analysis of Gradient Neural Network Exploited for Online Time-Varying Matrix Inversion

Authors: Yunong Zhang, Ke Chen, Hong-Zhou Tan — School of Information Science & Technology, Sun Yat-Sen University, Guangzhou, China

This technical note presents theoretical analysis and simulation results on the performance of a classic gradient neural network (GNN), originally designed for constant matrix inversion, when exploited for time-varying matrix inversion. In contrast to the constant matrix-inversion case, a GNN inverting a time-varying matrix can only approximately track its time-varying theoretical inverse rather than converging to it exactly; that is, the steady-state error between the GNN solution and the theoretical/exact inverse does not vanish. In this technical note, an upper bound on this error is first estimated. The global exponential rate at which this Hopfield-type neural network converges toward the error bound is then analyzed. Computer-simulation results finally substantiate the performance analysis of this gradient neural network exploited to invert time-varying matrices online.
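The abstract does not spell out the network's dynamics, but a standard gradient-based design of this kind minimizes the norm of the residual E(t) = A(t)X(t) − I via the flow Ẋ = −γ Aᵀ(AX − I), with γ > 0 a design gain. The sketch below — the example matrix A(t), the gain values, and all function names are illustrative assumptions, not taken from the paper — integrates these dynamics by forward Euler and exhibits the bounded, non-vanishing steady-state residual described above:

```python
import numpy as np

def A(t):
    # Hypothetical smoothly time-varying matrix of the form a*I + c*J,
    # chosen so it stays invertible for all t (det = a^2 + c^2 >= 1).
    a, c = 2.0 + np.sin(t), np.cos(t)
    return np.array([[a, c],
                     [-c, a]])

def gnn_residual(gamma, dt, T=5.0):
    """Integrate the assumed GNN dynamics X' = -gamma * A(t)^T (A(t) X - I)
    by forward Euler and return the final residual ||A(T) X(T) - I||_F."""
    X = np.eye(2)                      # arbitrary initial state
    steps = int(round(T / dt))
    for k in range(steps):
        t = k * dt
        E = A(t) @ X - np.eye(2)       # residual error the flow drives down
        X = X - dt * gamma * A(t).T @ E
    return float(np.linalg.norm(A(steps * dt) @ X - np.eye(2)))
```

Consistent with the analysis, the residual settles to a small but nonzero value, and raising the design gain γ shrinks the steady-state error bound (compare, e.g., `gnn_residual(100.0, 1e-3)` with `gnn_residual(1000.0, 1e-4)`, where the smaller step keeps the Euler integration stable at the larger gain).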

Published in: IEEE Transactions on Automatic Control (Volume 54, Issue 8)