Due to the chaotic nature of multilayer perceptron training, training error usually fails to be a monotonically non-increasing function of the number of hidden units. An initialization and training methodology is developed that significantly increases the probability that the training error is monotonically non-increasing. First, a structured initialization generates the random weights in a particular order. Second, larger networks are initialized using weights from smaller trained networks. Finally, the required number of iterations is calculated as a function of network size.
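The second idea, warm-starting a wider network from a smaller trained one, can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the function name `grow_hidden_layer`, the weight scales (0.1 and 0.01), and the placement of the new units are all assumptions made for the example. The trained weights are copied into the larger matrices, and the added hidden units receive small random weights so the enlarged network starts at roughly the smaller network's training error.

```python
import numpy as np

def grow_hidden_layer(W1, W2, new_hidden, rng):
    """Initialize a wider one-hidden-layer MLP from a smaller trained one.

    W1: (n_hidden, n_in)  input-to-hidden weights of the trained net.
    W2: (n_out, n_hidden) hidden-to-output weights of the trained net.
    new_hidden: target hidden-layer width (>= n_hidden).
    rng: a numpy Generator for the new units' random weights.
    """
    n_hidden, n_in = W1.shape
    n_out = W2.shape[0]
    W1_big = np.zeros((new_hidden, n_in))
    W2_big = np.zeros((n_out, new_hidden))
    # Copy the trained weights so the inherited mapping is preserved.
    W1_big[:n_hidden] = W1
    W2_big[:, :n_hidden] = W2
    # New units: small random input weights and near-zero output weights,
    # so they barely perturb the smaller network's function at the start.
    extra = new_hidden - n_hidden
    W1_big[n_hidden:] = 0.1 * rng.standard_normal((extra, n_in))
    W2_big[:, n_hidden:] = 0.01 * rng.standard_normal((n_out, extra))
    return W1_big, W2_big

# Example: grow a 3-hidden-unit net (4 inputs, 2 outputs) to 5 hidden units.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((3, 4))
W2 = rng.standard_normal((2, 3))
W1_big, W2_big = grow_hidden_layer(W1, W2, 5, rng)
```

Because the copied block reproduces the trained weights exactly and the new output weights are nearly zero, the wider network's initial training error can only be a small perturbation of the smaller network's final error, which is what makes a monotonically non-increasing error curve plausible as width grows.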