
Tikhonov-based Regularization of a Global Optimum Approach of One-layer Neural Networks with Fixed Transfer Function by Convex Optimization


4 Author(s)
Dik Kin Wong (CSLI, Stanford Univ., CA); M.P. Guimaraes; E.T. Uy; P. Suppes

Regularization is useful for extending learning models to classification tasks. Given the success of regularized perceptron-based (one-layer neural network) methods, a similar kind of regularization is introduced for two global-optimum approaches recently proposed by Castillo et al., which combine the flexibility of nonlinear transfer functions with the computational efficiency needed for complex problems. We focus on the two approaches that use sigmoid transfer functions. The first, linear approach involves solving a set of linear equations, while the second, min-max approach reduces to a linear programming problem. We introduce regularization in such a way that the linear approach remains linear and retains a closed-form solution, while the min-max approach is converted from a linear programming problem into a quadratic programming problem. Electroencephalography recordings are used to show how classification can be improved.
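To make the closed-form property of the regularized linear approach concrete, here is a minimal sketch. It assumes (as in Castillo et al.'s linear formulation) that the sigmoid's inverse is applied to the targets so the fit becomes linear, and that Tikhonov regularization then yields the ridge-regression normal equations. The function names, the bias handling, and the `lam` parameter are illustrative choices, not the paper's actual implementation.

```python
import numpy as np

def logit(y):
    """Inverse of the sigmoid transfer function."""
    return np.log(y / (1.0 - y))

def fit_regularized_linear(X, y, lam=1e-2):
    """Tikhonov-regularized closed-form fit (illustrative sketch).

    Transform targets through the inverse sigmoid, then solve the
    ridge-regression normal equations (X^T X + lam*I) w = X^T logit(y),
    so the approach stays linear and keeps a closed-form solution.
    """
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append a bias column
    z = logit(np.clip(y, 1e-6, 1 - 1e-6))          # keep logit finite at 0/1
    A = Xb.T @ Xb + lam * np.eye(Xb.shape[1])      # Tikhonov term lam*I
    return np.linalg.solve(A, Xb.T @ z)

def predict(X, w):
    """One-layer network output: sigmoid of the linear combination."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return 1.0 / (1.0 + np.exp(-(Xb @ w)))
```

As `lam` grows, the weights shrink toward zero, which is the usual trade-off between fitting the training targets and controlling model complexity; the min-max variant described above would instead require a quadratic programming solver once regularized.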

Published in:

2005 International Conference on Neural Networks and Brain (ICNN&B '05), Volume 3

Date of Conference:

13-15 Oct. 2005