The output weight optimization-hidden weight optimization (OWO-HWO) feedforward network training algorithm alternates between solving linear equations for the output weights and reducing a separate hidden-layer error function with respect to the hidden-layer weights. Here, a new hidden-layer error function is proposed that de-emphasizes net-function errors corresponding to saturated activation values. In addition, an adaptive learning rate based on the local shape of the error surface is used in hidden-layer training. Faster learning convergence is verified experimentally.
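One way to de-emphasize net-function errors at saturated activations is to weight each hidden unit's error by the activation derivative, which is near zero when a sigmoid unit is saturated. The sketch below illustrates this idea under that assumption; the weighting scheme and the function name `weighted_hidden_error` are illustrative, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(net):
    """Standard logistic activation."""
    return 1.0 / (1.0 + np.exp(-net))

def weighted_hidden_error(net, delta_net):
    """Illustrative weighting: scale raw net-function errors by the
    sigmoid derivative f'(net) = f(net) * (1 - f(net)), which is
    close to zero for saturated units (|net| large). Errors at
    saturated activations are therefore de-emphasized."""
    f = sigmoid(net)
    fprime = f * (1.0 - f)      # near 0.25 when unsaturated, near 0 when saturated
    return fprime * delta_net

# Two units with the same raw net-function error: one unsaturated
# (net = 0) and one deeply saturated (net = 8).
net = np.array([0.0, 8.0])
delta = np.array([1.0, 1.0])
e = weighted_hidden_error(net, delta)
# The saturated unit's weighted error is orders of magnitude smaller.
```

This kind of derivative-based weighting keeps hidden-layer updates from being dominated by units whose outputs can barely change, which is consistent with the goal of faster convergence stated above.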