Genetic algorithms (GAs) have been extensively applied to address the shortcomings of gradient-based learning methods in training feedforward neural networks (NNs). However, complicated properties of NN training, such as the context-dependence problem between neurons and the permutation problem of the genetic representation, make conventional GAs difficult to implement efficiently. In the present study, a novel hybrid GA design is proposed to overcome these problems. First, to eliminate the context dependence, the new method adopts a GA and a least squares estimator to separately optimize the neurons in the hidden and output layers. Second, to completely avoid the permutation problem, the proposed GA design employs two heterogeneous populations that evolve together but respectively learn the optimal combinations and parameters of the hidden neurons. Finally, experimental studies show that, in comparison with five well-known conventional approaches, the new training method achieves much better approximation and generalization capabilities in nonlinear static and dynamic modeling, especially when the observed signals are corrupted by large measurement noise.
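The core hybrid idea described above — letting a GA search only the hidden-layer parameters while a least squares estimator solves the output weights in closed form — can be illustrated with a minimal sketch. This is not the paper's exact algorithm (it omits the two-population scheme for hidden-neuron combinations); the network size, population size, mutation scale, and toy data below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task (assumed for illustration): noisy samples of y = sin(x)
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

H_NEURONS = 8        # hidden neurons (assumed)
POP, GENS = 30, 40   # GA population size and generations (assumed)

def hidden_out(params, X):
    # params: flat chromosome holding hidden weights then biases
    W = params[:H_NEURONS].reshape(1, -1)   # input-to-hidden weights
    b = params[H_NEURONS:].reshape(1, -1)   # hidden biases
    return np.tanh(X @ W + b)               # (n_samples, H_NEURONS)

def fitness(params):
    # Least squares solves the output layer in closed form, so the GA
    # never has to search those weights (removes context dependence
    # between hidden and output neurons)
    H = hidden_out(params, X)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return np.mean((H @ beta - y) ** 2)     # training MSE

# Simple real-coded GA: truncation selection, mean crossover, Gaussian mutation
pop = rng.standard_normal((POP, 2 * H_NEURONS))
for gen in range(GENS):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[: POP // 2]]
    parents = elite[rng.integers(0, len(elite), (POP - len(elite), 2))]
    children = parents.mean(axis=1) \
        + 0.1 * rng.standard_normal((POP - len(elite), 2 * H_NEURONS))
    pop = np.vstack([elite, children])

best = pop[np.argmin([fitness(p) for p in pop])]
print(f"best training MSE: {fitness(best):.4f}")
```

Because the output weights are always least-squares optimal for any candidate hidden layer, each chromosome is scored at its true potential, which is the practical payoff of the hybrid decomposition.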