Support vector machines (SVMs) are currently among the classification systems most widely used in pattern recognition and data mining because of their accuracy and generalization capability. However, in complex classification tasks where different errors carry different penalties, one should account for the overall classification cost produced by the classifier rather than its accuracy alone. It is thus necessary to provide methods for tuning the SVM to the costs of the particular application. Depending on the characteristics of the cost matrix, this can be done during or after the learning phase of the classifier. In this paper we introduce two optimization schemes based on these two approaches and compare their performance on various data sets and kernels. Initial experimental results show that both proposed schemes are suitable for tuning SVMs in cost-sensitive applications.
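The "during learning" approach mentioned above can be illustrated with a minimal sketch: reweighting the SVM's error penalties per class so that the costlier error type is penalized more heavily during training. This sketch uses scikit-learn's `class_weight` parameter and an illustrative 5:1 cost ratio; it is an assumption-laden example, not the optimization schemes proposed in the paper.

```python
# Minimal sketch of cost-sensitive SVM tuning during learning.
# Assumptions: scikit-learn's SVC, a synthetic imbalanced data set,
# and an illustrative cost matrix where a false negative costs 5x
# more than a false positive.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(
    n_samples=400, weights=[0.9, 0.1], random_state=0
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

cost_fn, cost_fp = 5.0, 1.0  # hypothetical per-error costs

# Plain SVM vs. cost-weighted SVM (per-class penalty scaling).
plain = SVC(kernel="rbf").fit(X_tr, y_tr)
weighted = SVC(
    kernel="rbf", class_weight={0: cost_fp, 1: cost_fn}
).fit(X_tr, y_tr)

def total_cost(model):
    """Overall classification cost on the test set."""
    pred = model.predict(X_te)
    fn = np.sum((y_te == 1) & (pred == 0))
    fp = np.sum((y_te == 0) & (pred == 1))
    return cost_fn * fn + cost_fp * fp

print("plain cost:", total_cost(plain))
print("weighted cost:", total_cost(weighted))
```

The "after learning" alternative would instead keep the trained classifier fixed and shift its decision threshold to minimize expected cost; which approach applies depends on the structure of the cost matrix, as the abstract notes.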