Traditional methods for solving multi-class problems, commonly known as multi-class SVMs, combine the results of several decomposed binary SVMs to form the final decision function. The prevalent schemes are 'one vs. one' and 'one vs. all', which use voting among the binary classifiers to determine the winning class. However, they do not scale well with data size and the number of classes. The Core Vector Machine (CVM) is a promising technique for scaling a binary SVM to large data sets via a greedy-expansion strategy, in which the kernels must be normalized to ensure the equivalence between the kernel-induced spaces of the SVM and the Minimum Enclosing Ball (MEB). The idea behind CVM can also be used to reformulate the multi-class SVM as an MEB problem; building on this, we propose an approximate MEB algorithm with smaller core sets for multi-class SVM training. Experimental results on synthetic and benchmark data sets demonstrate the competitive performance of the proposed method in both training time and training accuracy.
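As a rough illustration only (not the paper's algorithm), the greedy-expansion idea underlying CVM can be sketched with a Badoiu-Clarkson-style (1+eps)-approximate MEB computed in input space; the points the center steps toward play the role of the core set:

```python
import numpy as np

def approx_meb(points, eps=0.01):
    """Greedy-expansion (1+eps)-approximate Minimum Enclosing Ball
    (Badoiu-Clarkson style): repeatedly move the center a shrinking
    step toward the farthest point. The indices touched form a small
    core set whose MEB approximates that of the full data."""
    c = points[0].astype(float).copy()   # start at an arbitrary point
    core = {0}
    iters = int(np.ceil(1.0 / eps**2))   # O(1/eps^2) iterations suffice
    for i in range(1, iters + 1):
        dists = np.linalg.norm(points - c, axis=1)
        far = int(np.argmax(dists))      # farthest point from current center
        core.add(far)
        c += (points[far] - c) / (i + 1)  # shrinking step toward it
    radius = np.linalg.norm(points - c, axis=1).max()
    return c, radius, sorted(core)
```

The core set stays small because each iteration adds at most one point, independent of the total data size; in CVM the same expansion is carried out in the kernel-induced feature space rather than the input space.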