The training of perceptrons is discussed in the framework of nonsmooth optimization. An investigation of Rosenblatt's perceptron training rule shows that convergence or the failure to converge in certain situations can be easily understood in this framework. An algorithm based on results from nonsmooth optimization is proposed and its relation to the "constrained steepest descent" method is investigated. Numerical experiments verify that the "constrained steepest descent" algorithm may be further improved by the integration of methods from nonsmooth optimization.
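For context, Rosenblatt's perceptron training rule mentioned above updates the weight vector only on misclassified samples. The following is a minimal sketch of that rule, not the paper's nonsmooth-optimization algorithm; the function names and the toy dataset are illustrative assumptions.

```python
# Sketch of Rosenblatt's perceptron training rule (illustrative names).
# On a misclassified sample (y_i * <w, x_i> <= 0) the weights are
# updated by w <- w + y_i * x_i.

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def train_perceptron(samples, labels, max_epochs=100):
    """Return weights after Rosenblatt updates; labels are in {-1, +1}.
    Each sample is augmented with a constant bias feature of 1.0."""
    w = [0.0] * (len(samples[0]) + 1)
    for _ in range(max_epochs):
        updated = False
        for x, y in zip(samples, labels):
            xa = list(x) + [1.0]          # bias feature
            if y * dot(w, xa) <= 0:       # misclassified or on the boundary
                w = [wi + y * xi for wi, xi in zip(w, xa)]
                updated = True
        if not updated:                   # no mistakes this pass: converged
            break
    return w

# Linearly separable toy data, so the rule is guaranteed to converge.
X = [(2.0, 2.0), (1.5, 1.0), (-1.0, -1.0), (0.0, -2.0)]
Y = [1, 1, -1, -1]
w = train_perceptron(X, Y)
errors = sum(y * dot(w, list(x) + [1.0]) <= 0 for x, y in zip(X, Y))
```

On non-separable data this loop never satisfies the stopping condition and cycles until `max_epochs`, which is exactly the failure mode the abstract says becomes easy to understand in the nonsmooth-optimization framework.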