This paper presents a novel learning algorithm for training locally recurrent, globally feedforward neural networks. The training task is formulated as a constrained optimization problem with a twofold objective: (i) minimization of an error measure, yielding an accurate approximation of the input/output mapping, and (ii) optimization of an additional functional that aims to accelerate the learning process. Simulation results on a benchmark identification problem demonstrate that, compared to other learning schemes, the proposed algorithm offers improved convergence speed, accuracy, and robustness.
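The abstract does not give the algorithm itself, but the general idea of combining an error measure with an additional functional can be illustrated with a generic penalty-form sketch. This is a hypothetical stand-in, not the paper's method: the network is an ordinary one-hidden-layer feedforward net, and a simple weight-norm term plays the role of the auxiliary functional Phi.

```python
import numpy as np

# Hypothetical illustration (NOT the paper's algorithm): gradient descent
# on a penalty-form objective J(w) = E(w) + lam * Phi(w), where E is the
# mean-squared error measure and Phi is an auxiliary functional (here a
# weight-norm term stands in for it).

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (64, 1))
y = np.sin(np.pi * X)                       # toy input/output mapping

# one-hidden-layer feedforward net (stand-in for the LRGF architecture)
W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

lam = 1e-3      # penalty weight on the auxiliary functional
lr = 0.1        # learning rate

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

for step in range(2000):
    h, out = forward(X)
    err = out - y
    # backpropagated gradients of the error measure E
    g_out = 2 * err / len(X)
    gW2 = h.T @ g_out; gb2 = g_out.sum(0)
    g_h = g_out @ W2.T * (1 - h ** 2)
    gW1 = X.T @ g_h; gb1 = g_h.sum(0)
    # descend on E plus the gradient of Phi(w) = ||w||^2 / 2
    W1 -= lr * (gW1 + lam * W1); b1 -= lr * gb1
    W2 -= lr * (gW2 + lam * W2); b2 -= lr * gb2

mse = float(np.mean((forward(X)[1] - y) ** 2))
print(round(mse, 4))
```

In the paper's setting, the auxiliary functional is chosen to accelerate convergence rather than regularize, and the optimization is constrained rather than penalized, but the structure of the update (error gradient plus an auxiliary-term gradient) carries over.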
Proceedings of the 2005 IEEE International Joint Conference on Neural Networks (IJCNN '05), Vol. 2
Date of Conference: 31 July-4 Aug. 2005