In this paper, a novel methodology, termed the reference model approach, is proposed for the stability analysis of neural networks. The core idea is to study a neural network model with reference to other related models, so that different modeling approaches can be used in combination and cross-fertilized. Focusing on two representative neural network modeling approaches (the neuron-state modeling approach and the local-field modeling approach), we establish a rigorous theoretical basis for the feasibility and efficiency of the reference model approach. The new approach has been used to develop a series of new, generic stability theories for various neural network models. These results have been applied to several typical neural network systems, including Hopfield-type neural networks, recurrent back-propagation neural networks, BSB-type neural networks, bound-constrained optimization neural networks, and cellular neural networks. The results obtained unify, sharpen, or generalize most existing stability assertions, illustrating the feasibility and power of the new method.
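The relationship between the two modeling approaches can be illustrated with a minimal numerical sketch. The models below are the standard textbook forms, not necessarily the exact formulations of this paper, and the activation, weights, and parameters are illustrative assumptions: a neuron-state model x' = -x + W·g(x) + I and a local-field model u' = -u + g(W·u + I), whose equilibria correspond via u* = g(x*) and x* = W·u* + I.

```python
import numpy as np

# Neuron-state model:  x' = -x + W @ g(x) + I
# Local-field model:   u' = -u + g(W @ u + I)
# With g = tanh (Lipschitz constant 1) and spectral norm ||W|| < 1, each
# model is a contraction and has a unique, globally stable equilibrium.

def simulate(f, z0, dt=0.01, steps=20000):
    """Forward-Euler integration of z' = f(z); the Euler fixed point
    coincides with the ODE equilibrium, since z + dt*f(z) = z iff f(z) = 0."""
    z = z0.copy()
    for _ in range(steps):
        z = z + dt * f(z)
    return z

rng = np.random.default_rng(0)
n = 4
W = rng.standard_normal((n, n))
W = 0.5 * W / np.linalg.norm(W, 2)   # rescale so ||W|| = 0.5 (contraction)
I = rng.standard_normal(n)
g = np.tanh

# Integrate each model from independent, arbitrary initial states.
x_star = simulate(lambda x: -x + W @ g(x) + I, rng.standard_normal(n))
u_star = simulate(lambda u: -u + g(W @ u + I), rng.standard_normal(n))

# The equilibria of the two models are images of each other under the
# change of variables u = g(x), x = W @ u + I.
print(np.allclose(u_star, g(x_star), atol=1e-4))
print(np.allclose(x_star, W @ u_star + I, atol=1e-4))
```

Because the two equilibria determine each other, a stability conclusion proved in one modeling framework can be transferred to the other, which is the kind of cross-fertilization the reference model approach formalizes.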