The BP (backpropagation) process is a popular learning algorithm for neural networks. Despite many successful applications, the BP process has some well-known drawbacks. These drawbacks stem from the fact that the BP process is a gradient-based optimization procedure without a line search. In this paper, a new learning algorithm is presented based on a solution method for nonlinear equations. Compared with the gradient-based procedure, the proposed method often converges faster to the desired results. Newton's method is applied to solve the nonlinear equations. However, the major difficulty with Newton's method is that its convergence depends on the initial point. To ensure global convergence, independent of the initial point, the homotopy continuation method is employed.
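To illustrate the idea in the simplest setting, the following is a minimal sketch of Newton's method combined with a homotopy continuation, applied to a single scalar equation rather than the paper's neural-network formulation. The convex homotopy H(x, t) = t·f(x) + (1 − t)·(x − x0), the step count, and the example function are illustrative assumptions, not taken from the paper: at t = 0 the root is the chosen starting point x0, and as t is tracked to 1 the root of H is deformed into a root of f, with each step warm-started by Newton's method from the previous root.

```python
def newton(f, df, x, tol=1e-10, max_iter=50):
    """Plain Newton iteration for a scalar equation f(x) = 0."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / df(x)
    return x

def homotopy_newton(f, df, x0, steps=20):
    """Track H(x, t) = t*f(x) + (1 - t)*(x - x0) = 0 from t = 0 to t = 1.

    At t = 0 the root is x0 by construction; at t = 1 the roots of H
    coincide with the roots of f.  Each continuation step is corrected
    with Newton's method, warm-started from the previous root.
    """
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        H  = lambda y, t=t: t * f(y) + (1 - t) * (y - x0)
        dH = lambda y, t=t: t * df(y) + (1 - t)
        x = newton(H, dH, x)
    return x

# Illustrative example: f(x) = x^3 - 2x - 5 has a real root near 2.0946.
f  = lambda x: x**3 - 2*x - 5
df = lambda x: 3*x**2 - 2
root = homotopy_newton(f, df, x0=0.0)
```

Starting plain Newton's method from x0 = 0 for this cubic can stall or wander, because f'(x) is small near the start; the continuation instead keeps every Newton solve close to a known root, which is the global-convergence property the abstract attributes to the homotopy approach.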