We present results from a comparative empirical study of two methods for constructing support vector machines (SVMs). The first method is the conventional one based on the quadratic programming approach, which builds the optimal separating hyperplane maximizing the margin between two classes (SVM-Q). The second method is based on the linear programming approach suggested by Vapnik, which builds a separating hyperplane with the minimum number of support vectors (SVM-L). Using synthetic data from two classes, we compare the classification performance of these SVMs and give a geometrical comparison of their separating hyperplanes and support vectors. We show that both classifiers achieve practically identical classification accuracy and generalization performance. However, SVM-L has many fewer support vectors than SVM-Q. We also show that, in contrast to SVM-Q, which selects support vectors from the margin between the two classes, the support vectors of SVM-L lie on the outermost borders of the classes, at the maximum distance from the opposite class.