Based on the L-2 Support Vector Machine (SVM), Vapnik and Vashist introduced the concept of Learning Using Privileged Information (LUPI). This new paradigm incorporates elements of human teaching into the process of machine learning. However, by utilizing privileged information, the extended L-2 SVM model given by Vapnik and Vashist doubles the number of parameters of the standard L-2 SVM, so considerable computing time must be spent on parameter tuning. To reduce this workload, we proposed using the L-1 SVM instead of the L-2 SVM for LUPI in our previous work. Unlike LUPI with the L-2 SVM, which is formulated as a quadratic program, LUPI with the L-1 SVM is essentially a linear program and is computationally much cheaper. On this basis, we discuss in this paper how to employ the wisdom from teachers better and more flexibly through LUPI with the L-1 SVM. By introducing kernels, we propose an extended L-1 SVM model that remains a linear program. With the help of nonlinear kernels, the new model allows the privileged information to be explored in a transformed feature space instead of the original data domain. Numerical experiments are carried out on both time series prediction and digit recognition problems, and the experimental results validate the effectiveness of our new method.
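To make the linear programming claim concrete, the following is a minimal sketch (not the authors' implementation) of training a standard L-1 SVM by linear programming with `scipy.optimize.linprog`. The L-1 SVM objective min ||w||_1 + C·sum(xi) subject to y_i(w·x_i + b) >= 1 - xi, xi >= 0 becomes a linear program after splitting w and b into nonnegative parts. The LUPI extension described in the abstract would add further variables and constraints built from the privileged features (and, in the kernelized model, kernel expansions); those are omitted here, and the function name `l1_svm_lp` is illustrative.

```python
# Sketch: L-1 SVM as a linear program (the base model the abstract builds on).
import numpy as np
from scipy.optimize import linprog

def l1_svm_lp(X, y, C=1.0):
    """Solve  min ||w||_1 + C*sum(xi)
       s.t.   y_i (w . x_i + b) >= 1 - xi_i,  xi_i >= 0.

    LP trick: write w = u - v and b = bp - bn with u, v, bp, bn >= 0,
    so |w_j| = u_j + v_j at the optimum and the objective is linear.
    Variable layout: [u (d), v (d), bp, bn, xi (n)], all nonnegative.
    """
    n, d = X.shape
    c = np.concatenate([np.ones(2 * d), [0.0, 0.0], C * np.ones(n)])
    # Margin constraints rewritten for linprog's A_ub @ z <= b_ub form:
    #   -y_i x_i.(u - v) - y_i (bp - bn) - xi_i <= -1
    Yx = y[:, None] * X
    A_ub = np.hstack([-Yx, Yx, -y[:, None], y[:, None], -np.eye(n)])
    b_ub = -np.ones(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (2 * d + 2 + n))
    z = res.x
    w = z[:d] - z[d:2 * d]
    b = z[2 * d] - z[2 * d + 1]
    return w, b

# Toy usage: a linearly separable 2-D problem.
X = np.array([[2.0, 2.0], [3.0, 1.0], [-2.0, -2.0], [-1.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = l1_svm_lp(X, y, C=10.0)
pred = np.sign(X @ w + b)
```

Because the objective and all constraints are linear, any off-the-shelf LP solver applies, which is the computational advantage over the quadratic program behind the L-2 SVM.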