Abstract:
Asymmetric margin error costs for positive and negative examples are often cited as an efficient heuristic compensating for unrepresentative priors in training support vector classifiers. In this paper we show that this heuristic is well justified via simple re-sampling ideas applied to the dual Lagrangian defining the 1-norm soft-margin support vector machine. This observation also provides a simple expression for the asymptotically optimal ratio of margin error penalties, eliminating the need for the trial-and-error experimentation normally encountered. This method allows the use of a smaller, balanced training data set in problems characterised by widely disparate prior probabilities, reducing the training time. The usefulness of this method is then demonstrated on a real world benchmark problem.
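A minimal sketch of the asymmetric-penalty idea, not the paper's implementation: in the 1-norm soft-margin dual, each class gets its own box constraint, 0 <= alpha_i <= C+ for positives and 0 <= alpha_i <= C- for negatives. The code below uses scikit-learn's SVC, whose class_weight argument scales C per class, together with the commonly cited re-balancing ratio C+/C- = n-/n+. The paper's asymptotically optimal ratio is derived from the mismatch between training-set and operational priors, which this illustrative choice does not reproduce; the data, labels, and weights here are hypothetical.

    # Sketch: 1-norm soft-margin SVM with asymmetric margin error penalties.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Synthetic imbalanced training set: 900 negatives, 100 positives.
    X_neg = rng.normal(loc=-1.0, scale=1.0, size=(900, 2))
    X_pos = rng.normal(loc=+1.0, scale=1.0, size=(100, 2))
    X = np.vstack([X_neg, X_pos])
    y = np.concatenate([-np.ones(900), np.ones(100)])

    n_pos, n_neg = int((y == 1).sum()), int((y == -1).sum())
    C = 1.0

    # Asymmetric box constraints in the dual: 0 <= alpha_i <= C_{y(i)}.
    # Setting C+ * n+ = C- * n- equalises the total penalty budget per class,
    # i.e. C+ = C * n-/n+ with C- = C (the usual re-balancing heuristic).
    class_weight = {1: n_neg / n_pos, -1: 1.0}

    clf = SVC(kernel="rbf", C=C, class_weight=class_weight)
    clf.fit(X, y)
    print("training accuracy:", clf.score(X, y))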
Published in: IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)
Date of Conference: 15-19 July 2001
Date Added to IEEE Xplore: 07 August 2002
Print ISBN: 0-7803-7044-9
Print ISSN: 1098-7576