This work introduces a training scheme for layered neural networks that combines dynamic model alteration during training with regularization of the features extracted by the hidden-layer units. Model Switching (MS), a scheme that searches simultaneously for an optimal model and its parameters, has previously been shown to improve training efficiency and, as a side effect, generalization ability. In MS, the operation that switches the network to a different model involves orthogonalization of the features extracted in the hidden layer. On the assumption that this orthogonalization underlies the observed benefits, we propose the joint use of MS and orthogonalization of the hidden-layer features, realized by introducing a regularization term into the training objective. A network trained with the proposed scheme is applied to a pattern recognition problem, and improvements in both training efficiency and generalization ability are observed.
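As a rough illustration only (not the authors' implementation, which is not specified in the abstract), one common way to realize such an orthogonality regularizer is to penalize the squared off-diagonal entries of the Gram matrix of the hidden-unit activations; the function name and the choice of cosine normalization below are assumptions made for this sketch:

```python
import numpy as np

def orthogonality_penalty(H):
    """Penalize correlation between hidden-unit features.

    H is a (samples x hidden_units) activation matrix. Each unit's
    activation vector is normalized to unit length, so the Gram matrix
    holds cosine similarities between units; the penalty is the sum of
    squared off-diagonal entries, which is zero iff the features are
    mutually orthogonal.
    """
    Hn = H / np.linalg.norm(H, axis=0, keepdims=True)
    G = Hn.T @ Hn                       # unit-to-unit cosine similarities
    off_diag = G - np.diag(np.diag(G))  # zero out the diagonal
    return np.sum(off_diag ** 2)

# Usage: add lambda_reg * orthogonality_penalty(H) to the task loss
# during training, where lambda_reg is a tunable weight.
H = np.random.randn(100, 8)  # e.g. 100 samples, 8 hidden units
penalty = orthogonality_penalty(H)
```

In this form the penalty vanishes exactly when the hidden features are mutually orthogonal, matching the effect that the MS model-switching operation is assumed to exploit.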