This paper focuses on the fundamental interpretation of several variants of the support vector machine (SVM), which achieve comparable performance on different two-class data sets in machine learning. Building on the original SVM, L2RL2SVM, C-SVC, L1RL2SVM and ν-SVC handle large-scale, high-dimensional data well with linear classifiers. When the training data cannot be separated by linear classifiers, the kernel SVM (KSVM) is a crucial technique for overcoming this limitation in a supervised-learning framework. In a semi-supervised-learning framework, LapSVM is the state-of-the-art technique, handling both the generalization of the classifier and the out-of-sample problem well. Experiments on four common two-class data sets with eleven algorithms show that the accuracies obtained with different algorithms on different data sets vary, and that prior geometric information from the training data is necessary to improve accuracy when the machine learns to select an optimal model, whether in a supervised-learning or a semi-supervised-learning framework. By analyzing the experimental results, we find that the distributional characteristics of each data set can be observed.
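As a rough illustration of the kind of comparison the abstract describes, the sketch below (ours, not the paper's code) fits a few supervised SVM variants on a synthetic two-class data set using scikit-learn; the estimator choices, hyperparameters, and the synthetic data set are assumptions standing in for the paper's eleven algorithms and four benchmarks.

```python
# Minimal sketch (assumed, not from the paper): compare supervised SVM
# variants on one synthetic two-class data set and report test accuracy.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC, SVC, NuSVC

# Synthetic two-class data set; a stand-in for the paper's benchmarks.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Variant names on the left are informal labels, mapping loosely to the
# abstract's terminology (e.g. SVC with kernel="rbf" plays the role of KSVM).
models = {
    "linear, squared hinge (cf. L2RL2SVM)": LinearSVC(loss="squared_hinge"),
    "C-SVC, linear kernel": SVC(kernel="linear", C=1.0),
    "nu-SVC, linear kernel": NuSVC(nu=0.5, kernel="linear"),
    "kernel SVM, RBF (cf. KSVM)": SVC(kernel="rbf", gamma="scale"),
}

accuracies = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    accuracies[name] = model.score(X_test, y_test)
    print(f"{name}: {accuracies[name]:.3f}")
```

On linearly separable data the linear variants and the RBF kernel score similarly; the gap in favor of the kernel machine appears once the classes are not linearly separable, which is the regime the abstract attributes to KSVM. LapSVM is omitted here because scikit-learn has no built-in Laplacian SVM.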