
Combining DC Algorithms (DCAs) and Decomposition Techniques for the Training of Nonpositive-Semidefinite Kernels

1 Author(s)
F. B. Akoa, Dept. of Planning, AES Sonel, Douala

Decomposition methods are among the most popular approaches for training support vector machines (SVMs). When kernels that do not satisfy Mercer's condition are used, new techniques must be designed to handle the nonpositive-semidefinite kernel matrices that result from this choice. In this work, we incorporate difference of convex functions (DC) optimization techniques into decomposition methods to tackle this difficulty. The new approach requires no modification of the problem, and we show that the mere use of a truncated DC algorithm (DCA) in the decomposition scheme produces a sufficient decrease of the objective function at each iteration. Thanks to this property, an asymptotic convergence proof of the new algorithm is obtained without any blockwise convexity assumption on the objective function. We also investigate a working set selection rule using second-order information for sequential minimal optimization (SMO)-type decomposition in the spirit of DC optimization. Numerical results show the robustness and the efficiency of the new methods compared with state-of-the-art software.
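To make the abstract's core idea concrete, the following is a minimal, hypothetical sketch (not the authors' actual algorithm) of a DCA applied to a box-constrained, SVM-style dual with an indefinite kernel matrix. The objective 0.5*a'Qa - e'a is split as a difference of convex functions by shifting the spectrum of Q, and each DCA step solves the convex subproblem obtained by linearizing the concave part. The equality constraint of the full SVM dual and the paper's decomposition/working-set machinery are omitted, and all names and parameters are illustrative.

    import numpy as np

    def dca_box_qp(Q, e, C, n_outer=30, n_inner=200):
        """Sketch: minimize 0.5*a'Qa - e'a over 0 <= a <= C, with Q
        possibly indefinite (non-PSD kernel). DC split:
        g(a) = 0.5*a'(Q + rho*I)a - e'a (convex), h(a) = 0.5*rho*||a||^2."""
        n = Q.shape[0]
        lam = np.linalg.eigvalsh(Q)              # ascending eigenvalues
        rho = max(0.0, -lam[0]) + 1e-8           # shift so Q + rho*I is PSD
        G = Q + rho * np.eye(n)
        step = 1.0 / np.linalg.eigvalsh(G)[-1]   # 1/L for projected gradient
        a = np.zeros(n)
        for _ in range(n_outer):
            c = e + rho * a                      # linearize h at the iterate
            x = a.copy()
            for _ in range(n_inner):             # convex subproblem solver:
                x = np.clip(x - step * (G @ x - c), 0.0, C)  # projected gradient
            a = x
        return a

    # Toy usage with a small symmetric, generally indefinite matrix.
    rng = np.random.default_rng(0)
    M = rng.standard_normal((5, 5))
    Q = (M + M.T) / 2
    alpha = dca_box_qp(Q, np.ones(5), C=1.0)

Truncating the inner loop to a few projected-gradient steps loosely mirrors the "truncated DCA" the abstract mentions, though the paper's precise truncation rule and its sufficient-decrease guarantee are not reproduced here.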

Published in:

IEEE Transactions on Neural Networks (Volume 19, Issue 11)