
Accelerating on-line training of LS-SVM with run-time reconfiguration


Authors (4): Shaojun Wang (Sch. of Electr. Eng. & Autom., Harbin Inst. of Technol., Harbin, China); Yu Peng; Guangquan Zhao; Xiyuan Peng

Least Squares Support Vector Machines (LS-SVM) is an efficient supervised learning tool that has been widely applied to real-time on-line data processing in many fields. However, on-line training of LS-SVM suffers from heavy computation, which greatly limits its practicality, especially in embedded systems. By leveraging the flexibility and high degree of parallelism offered by reconfigurable fabrics, we propose a Run-Time Reconfiguration (RTR) framework to accelerate the on-line training of LS-SVM. To achieve maximum computational parallelism, we divide the training process into two parts: kernel matrix formulation and least-squares problem solving. These two parts are dynamically loaded into the FPGA with RTR under the control of the embedded PowerPC. In the kernel matrix formulation part, we design a piecewise linear interpolation method to realize the radial basis function. In the least-squares problem solving part, a modified Cholesky decomposition is introduced to avoid the latency caused by square-root operations. The whole design is tested on a Virtex XC5VFX130T with a 150 MHz clock. Experiments show an appealing speedup, ranging from 6× to 218× over a Xeon CPU implementation on five differently sized datasets. Time-cost percentage analysis indicates that the proposed architecture can be effectively applied to LS-SVM training in applications with more than 1000 samples.
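The two training stages named in the abstract can be illustrated in software. The following is a minimal NumPy sketch, not the authors' hardware design: a kernel matrix built from a piecewise linear table of exp(−u), mimicking a lookup-table-friendly realization of the radial basis function, and a square-root-free (LDL^T) Cholesky factorization, the standard "modified Cholesky" variant that avoids square-root latency. Function names, table sizes, and the clamping range are illustrative assumptions.

```python
import numpy as np

def rbf_kernel_pwl(X, gamma=1.0, n_knots=256, u_max=8.0):
    """RBF kernel matrix K[i,j] = exp(-gamma * ||x_i - x_j||^2),
    with exp(-u) evaluated by piecewise linear interpolation on a
    precomputed table (illustrative stand-in for a hardware LUT)."""
    knots = np.linspace(0.0, u_max, n_knots)   # table abscissae
    vals = np.exp(-knots)                      # table entries
    sq = np.sum(X**2, axis=1)
    # Squared pairwise distances, clipped at 0 to absorb rounding.
    u = gamma * np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    # np.interp clamps inputs beyond u_max to the last table entry.
    return np.interp(u, knots, vals)

def ldl_solve(A, b):
    """Solve A x = b for symmetric positive definite A via the
    square-root-free factorization A = L D L^T (unit lower L)."""
    n = A.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        d[j] = A[j, j] - np.sum(L[j, :j]**2 * d[:j])
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - np.sum(L[i, :j] * L[j, :j] * d[:j])) / d[j]
    # Forward solve L y = b (unit diagonal), scale by D, back solve L^T x = z.
    y = np.zeros(n)
    for i in range(n):
        y[i] = b[i] - L[i, :i] @ y[:i]
    z = y / d
    x = np.zeros(n)
    for i in reversed(range(n)):
        x[i] = z[i] - L[i+1:, i] @ x[i+1:]
    return x
```

In an LS-SVM, the kernel matrix (plus a ridge term) forms the symmetric positive definite system whose solution gives the dual coefficients, so the two stages compose directly: `ldl_solve(rbf_kernel_pwl(X) + np.eye(len(X)) / C, y)`. Note that since every `d[j]` stays positive for a positive definite input, no square root is ever taken, which is the latency advantage the abstract refers to.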

Published in:

2011 International Conference on Field-Programmable Technology (FPT)

Date of Conference:

12-14 Dec. 2011
