Least Squares Support Vector Machines (LS-SVM) are an efficient supervised learning tool that has been widely applied to real-time, on-line data processing in many fields. However, on-line training of LS-SVM incurs a heavy computational burden, which greatly limits its practicality, especially in embedded systems. By leveraging the flexibility and high degree of parallelism offered by reconfigurable fabrics, we propose a Run-Time Reconfiguration (RTR) framework to accelerate on-line LS-SVM training. To maximize computational parallelism, we divide the training process into two parts: kernel matrix formulation and least-squares problem solving. These two parts are dynamically loaded into the FPGA via RTR under the control of the embedded PowerPC. In the kernel matrix formulation part, we design a piecewise linear interpolation method to realize the radial basis function. In the least-squares solving part, a modified Cholesky decomposition is introduced to avoid the latency caused by square-root operations. The whole design is tested on a Virtex XC5VFX130T with a 150 MHz clock. Experiments on five datasets of different sizes show appealing speedups ranging from 6× to 218× over a Xeon CPU implementation. A time-cost percentage analysis indicates that the proposed architecture can be effectively applied to LS-SVM training in applications with more than 1000 samples.
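The two pipeline stages described above can be illustrated with a software sketch. The NumPy code below is an assumption-laden illustration, not the paper's FPGA design: it approximates the RBF kernel with a precomputed lookup table and linear interpolation (standing in for the hardware's piecewise linear interpolator), and solves the resulting regularized system with a square-root-free LDL^T factorization (the "modified Cholesky" the abstract refers to). Table size, sigma, and the regularization constant are illustrative choices.

```python
import numpy as np

def rbf_kernel_interp(X, table_x, table_y):
    """Kernel matrix via table lookup + linear interpolation.

    table_x/table_y hold precomputed samples of exp(-d2 / (2*sigma^2)),
    mimicking a hardware piecewise linear approximation of the RBF.
    """
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.interp(d2, table_x, table_y)

def ldl_decompose(A):
    """Square-root-free (modified) Cholesky: A = L @ diag(D) @ L.T.

    L is unit lower triangular; no sqrt is needed, which is the point
    of using LDL^T instead of classic Cholesky in a latency-sensitive
    pipeline.
    """
    n = A.shape[0]
    L = np.eye(n)
    D = np.zeros(n)
    for j in range(n):
        D[j] = A[j, j] - np.sum(L[j, :j] ** 2 * D[:j])
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - np.sum(L[i, :j] * L[j, :j] * D[:j])) / D[j]
    return L, D

def ldl_solve(L, D, b):
    """Solve (L D L^T) x = b by forward substitution, scaling, back substitution."""
    z = np.linalg.solve(L, b)      # L z = b
    y = z / D                      # D y = z
    return np.linalg.solve(L.T, y) # L^T x = y
```

In the hardware pipeline, the interpolation table would live in on-chip memory and the factorization would be unrolled across processing elements; the sketch only mirrors the numerical steps.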