This technical note presents a theoretical analysis and simulation results on the performance of a classic gradient neural network (GNN), originally designed for constant matrix inversion but exploited here for time-varying matrix inversion. In contrast to the constant-matrix case, a GNN inverting a time-varying matrix can only approximately track the time-varying theoretical inverse rather than converge to it exactly. In other words, the steady-state error between the GNN solution and the theoretical/exact inverse does not vanish. In this technical note, an upper bound on this error is first estimated. The global exponential rate at which this Hopfield-type neural network converges to the error bound is then analyzed. Computer-simulation results substantiate the performance analysis of this gradient neural network exploited for online inversion of time-varying matrices.
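The behavior summarized above can be illustrated numerically. The sketch below integrates a standard gradient neural network of the form dX/dt = -γ·Aᵀ(t)(A(t)X − I) with forward Euler; the specific time-varying test matrix A(t), the gain γ, and the step size are illustrative assumptions, not the settings used in the technical note.

```python
import numpy as np

def gnn_invert(A_of_t, n, gamma=100.0, T=2.0, dt=1e-4):
    """Integrate the classic gradient neural network
        dX/dt = -gamma * A(t)^T (A(t) X - I)
    with forward Euler, starting from X(0) = I.
    Returns the state X(T), which tracks A(T)^{-1} approximately."""
    X = np.eye(n)
    I = np.eye(n)
    steps = int(round(T / dt))
    for k in range(steps):
        A = A_of_t(k * dt)
        X = X - dt * gamma * (A.T @ (A @ X - I))
    return X

# Illustrative time-varying matrix (assumed for this demo); it is
# invertible for all t, since A(t)^T A(t) = (5 + 4 sin t) * I.
def A(t):
    return np.array([[2.0 + np.sin(t),  np.cos(t)],
                     [-np.cos(t),       2.0 + np.sin(t)]])

T = 2.0
X_end = gnn_invert(A, n=2, gamma=100.0, T=T)
err = np.linalg.norm(X_end - np.linalg.inv(A(T)))
print(f"steady-state tracking error at t={T}: {err:.2e}")
```

Consistent with the analysis, the residual error is nonzero but bounded, and rerunning with a larger gain γ shrinks it (the bound scales inversely with γ), whereas for a constant matrix the same network converges to the exact inverse.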