SoftDoubleMaxMinOver: Perceptron-Like Training of Support Vector Machines

Authors: Thomas Martinetz, Kai Labusch, Daniel Schneegass (Institute for Neuro- and Bioinformatics, University of Lübeck, Lübeck, Germany)

The well-known MinOver algorithm is a slight modification of the perceptron algorithm and provides the maximum-margin classifier without a bias in linearly separable two-class classification problems. DoubleMinOver, an extension of MinOver that includes a bias, is introduced. An O(1/t) convergence is shown, where t is the number of learning steps, and the computational effort per step increases only linearly with the number of patterns. In its kernel formulation, selected training patterns have to be stored. A drawback of MinOver and DoubleMinOver is that this set of patterns does not consist of support vectors only. DoubleMaxMinOver, an extension of DoubleMinOver, overcomes this drawback by selectively forgetting all nonsupport vectors after a finite number of training steps. It is shown how this iterative procedure, which remains very similar to the perceptron algorithm, can be extended to classification with soft margins and used for training least squares support vector machines (SVMs). On benchmarks, the SoftDoubleMaxMinOver algorithm achieves the same performance as standard SVM software.
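
The abstract describes the algorithms only informally; the following is a minimal Python sketch of the perceptron-like DoubleMinOver idea as summarized above. It is not the paper's implementation (it omits the kernel formulation and DoubleMaxMinOver's forgetting step), it assumes linearly separable data with patterns X of shape (n, d) and labels y in {-1, +1}, and all function and parameter names are illustrative.

import numpy as np

def double_minover(X, y, n_steps=1000):
    # Perceptron-like sketch: at every step, reinforce the worst-classified
    # (minimum-margin) pattern of each class; the bias is then placed midway
    # between the two closest patterns of the two classes.
    w = np.zeros(X.shape[1])
    pos = np.flatnonzero(y == +1)
    neg = np.flatnonzero(y == -1)
    for _ in range(n_steps):
        scores = X @ w
        i_pos = pos[np.argmin(scores[pos])]  # +1 pattern with minimal margin
        i_neg = neg[np.argmax(scores[neg])]  # -1 pattern with minimal margin
        w += X[i_pos] - X[i_neg]             # double perceptron-style update
    scores = X @ w
    b = -(scores[pos].min() + scores[neg].max()) / 2.0  # bias at the midpoint
    return w, b

Note the cost structure claimed in the abstract: each step is a single pass over the patterns to find the two minimum-margin examples, so the effort per step grows only linearly with the number of patterns.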

Published in:

IEEE Transactions on Neural Networks (Volume: 20, Issue: 7)