Probabilistic interpretations and Bayesian methods for support vector machines


Author(s):
P. Sollich — Dept. of Math., King's Coll., London, UK

Support vector machines (SVMs) can be interpreted as maximum a posteriori solutions to inference problems with Gaussian process (GP) priors and appropriate likelihood functions. Focusing on the case of classification, the author first shows that such an interpretation gives a clear intuitive meaning to SVM kernels, as covariance functions of GP priors; this can be used to guide the choice of kernel. Next, the probabilistic interpretation allows Bayesian methods to be applied to SVMs. Using a local approximation of the posterior around its maximum (the standard SVM solution), he discusses how the evidence for a given kernel and noise parameter can be estimated, and how approximate error bars for the classification of test points can be calculated.
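The link between kernels and covariance functions can be illustrated with a small sketch (not taken from the paper): the same RBF kernel that an SVM would evaluate as k(x, x') also defines the covariance of a zero-mean GP prior, from which latent functions can be sampled. The kernel width `ell` and the grid of inputs are hypothetical choices for this demonstration.

```python
import numpy as np

def rbf_kernel(x1, x2, ell=1.0):
    """RBF kernel k(x, x') = exp(-(x - x')^2 / (2 ell^2)) on 1-D inputs."""
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return np.exp(-d2 / (2.0 * ell ** 2))

rng = np.random.default_rng(0)
x = np.linspace(-3.0, 3.0, 50)

# Gram matrix of the SVM kernel, read as the GP prior covariance.
K = rbf_kernel(x, x)

# A valid covariance function must yield a symmetric PSD matrix.
assert np.allclose(K, K.T)
assert np.linalg.eigvalsh(K).min() > -1e-8

# Draw latent functions from the GP prior N(0, K); under the SVM-as-MAP
# view, the trained classifier's latent function is the mode of the
# posterior built from such a prior and the SVM loss as a likelihood.
samples = rng.multivariate_normal(
    np.zeros_like(x), K + 1e-8 * np.eye(len(x)), size=3
)
print(samples.shape)  # (3, 50)
```

Sampling from the prior with different kernels (or kernel widths) makes visible what function class each choice favours, which is the intuition the GP reading of the kernel provides.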

Published in:

Ninth International Conference on Artificial Neural Networks (ICANN 99), Conf. Publ. No. 470, Volume 1

Date of Conference:

1999