Novel feature-selection methods are proposed for multi-class support-vector-machine (SVM) learning. They are based on two new feature-ranking criteria. Both criteria, collectively termed multi-class feature-based sensitivity of posterior probabilities (MFSPP), evaluate the importance of a feature by computing the aggregate value, over the feature space, of the absolute difference between the probabilistic outputs of the multi-class SVM with and without the feature. In their original form, the criteria are computationally expensive, so three approximations, MFSPP1-MFSPP3, are proposed. In a carefully controlled experimental study, all three approximations are tested on various artificial and benchmark datasets. Results show that they outperform the multi-class versions of the support-vector-machine recursive feature-elimination (SVM-RFE) method and other standard filtering methods, with one of the three proposed approximations having a slight edge over the other two. Based on the experiments, the advantage of the proposed methods is particularly significant when the training dataset is sparse.
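To make the ranking criterion concrete, the sketch below illustrates the exact (pre-approximation) MFSPP idea, not the cheaper MFSPP1-MFSPP3 approximations described in the paper: each feature is scored by the aggregate absolute change in the SVM's class-posterior estimates when that feature is removed. Averaging over the training samples stands in for integrating over the feature space, and the function name `mfspp_scores`, the choice of scikit-learn's `SVC` with Platt-scaled probabilities, the Iris dataset, and all hyperparameters are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC


def mfspp_scores(X, y, **svm_kwargs):
    """Sketch of the exact MFSPP criterion: rank features by the
    sensitivity of the SVM's posterior-probability estimates to
    removing each feature (sample average stands in for the
    feature-space aggregate used in the paper)."""
    # Train a probabilistic multi-class SVM on the full feature set.
    full = SVC(probability=True, **svm_kwargs).fit(X, y)
    p_full = full.predict_proba(X)

    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        # Retrain with feature j removed; this retraining per feature
        # is what makes the exact criterion computationally expensive.
        X_minus_j = np.delete(X, j, axis=1)
        reduced = SVC(probability=True, **svm_kwargs).fit(X_minus_j, y)
        p_reduced = reduced.predict_proba(X_minus_j)
        # Aggregate absolute posterior difference over classes,
        # averaged over the training samples.
        scores[j] = np.abs(p_full - p_reduced).sum(axis=1).mean()
    return scores


X, y = load_iris(return_X_y=True)
print(mfspp_scores(X, y, kernel="rbf", gamma="scale", random_state=0))
```

Features with larger scores perturb the posterior estimates more when removed and would be ranked as more important; a feature-selection procedure would keep the top-scoring features and discard the rest.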