We address the problem of optimizing kernel parameters in support vector machine modeling, especially when the number of parameters is greater than one, as in polynomial kernels and in KMOD, our newly introduced kernel. The present work is an extended experimental study of the framework proposed by Chapelle et al. (2001) for optimizing SVM kernels using an analytic upper bound on the error. However, our optimization scheme minimizes an empirical error estimate using a quasi-Newton method. To assess the method, the approach is further used to adapt the KMOD, RBF, and polynomial kernels on synthetic data and on the NIST database. The method converges much faster than simple gradient descent while achieving satisfactory results.
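The general scheme described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes an RBF kernel with a single width parameter, uses a smoothed validation-error surrogate as the empirical error estimate, and relies on scipy's BFGS quasi-Newton routine with finite-difference gradients.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic two-class problem standing in for the paper's synthetic data.
X, y = make_classification(n_samples=300, n_features=5, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.5, random_state=0)

def smoothed_error(theta):
    """Smoothed empirical error estimate: mean sigmoid of the negative
    validation margins of an RBF SVM, as a function of log(gamma).
    Optimizing over log(gamma) keeps the parameter positive."""
    gamma = float(np.exp(theta[0]))
    clf = SVC(kernel="rbf", gamma=gamma).fit(X_tr, y_tr)
    margins = clf.decision_function(X_va) * (2 * y_va - 1)  # signed margins
    return float(np.mean(1.0 / (1.0 + np.exp(margins))))    # smooth 0/1 proxy

# Quasi-Newton (BFGS) minimization of the error estimate over log(gamma);
# gradients are approximated by finite differences.
res = minimize(smoothed_error, x0=np.array([0.0]), method="BFGS")
print("optimized gamma:", np.exp(res.x[0]), "objective:", res.fun)
```

For a multi-parameter kernel such as a polynomial kernel or KMOD, `theta` would simply hold several log-parameters and the same quasi-Newton routine would optimize them jointly.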