Accelerating SVMs by integrating GPUs into MapReduce clusters

Author:

Herrero-Lopez, S.; Intelligent Engineering Systems Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA

Abstract:

The uninterrupted growth of information repositories has progressively led data-intensive applications, such as MapReduce-based systems, into the mainstream. The MapReduce paradigm has frequently proven to be a simple yet flexible and scalable technique for distributing algorithms across thousands of nodes and petabytes of information. Under these circumstances, classic data-mining algorithms have been adapted to this model in order to run in production environments. Unfortunately, the high-latency nature of this architecture has relegated the applicability of these algorithms to batch-processing scenarios. In spite of this shortcoming, the emergence of massively threaded shared-memory multiprocessors, such as Graphics Processing Units (GPUs), on the commodity computing market has enabled these algorithms to be executed orders of magnitude faster while keeping the same MapReduce-based model. In this paper, we propose the integration of massively threaded shared-memory multiprocessors into MapReduce-based clusters, creating a unified heterogeneous architecture that enables executing Map and Reduce operators on thousands of threads across multiple GPU devices and nodes, while maintaining the built-in reliability of the baseline system. For this purpose, we created a programming model that facilitates the collaboration of multiple CPU cores and multiple GPU devices towards the resolution of a data-intensive problem. To prove the potential of this hybrid system, we take a popular NP-hard supervised learning algorithm, the Support Vector Machine (SVM), and show that a 36x-192x speedup can be achieved on large datasets without changing the model or leaving the commodity hardware paradigm.
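
The map/reduce decomposition described in the abstract can be sketched at the single-GPU level. The CUDA example below is only an illustration under assumed names and sizes (N, D, the RBF kernel width, and the dummy data are placeholders, not the authors' implementation or their cluster integration): a "map" kernel computes one alpha_i * y_i * K(x_i, q) term of an SVM decision function per thread, and a "reduce" kernel collapses those terms into per-block partial sums that the host then totals.

// Minimal single-GPU sketch of a MapReduce-style SVM decision-function
// evaluation. All sizes and data are illustrative assumptions.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

constexpr int N = 4096;     // number of training examples (assumed)
constexpr int D = 16;       // feature dimension (assumed)
constexpr int BLOCK = 256;  // threads per block

// "Map": thread i computes alpha_i * y_i * exp(-gamma * ||x_i - q||^2).
__global__ void mapKernelTerms(const float* X, const float* alpha,
                               const float* y, const float* q,
                               float gamma, float* terms) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= N) return;
    float dist2 = 0.0f;
    for (int d = 0; d < D; ++d) {
        float diff = X[i * D + d] - q[d];
        dist2 += diff * diff;
    }
    terms[i] = alpha[i] * y[i] * expf(-gamma * dist2);
}

// "Reduce": block-level tree reduction of the mapped terms into one
// partial sum per thread block.
__global__ void reducePartialSums(const float* terms, float* partial) {
    __shared__ float buf[BLOCK];
    int tid = threadIdx.x;
    int i = blockIdx.x * blockDim.x + tid;
    buf[tid] = (i < N) ? terms[i] : 0.0f;
    __syncthreads();
    for (int s = BLOCK / 2; s > 0; s >>= 1) {
        if (tid < s) buf[tid] += buf[tid + s];
        __syncthreads();
    }
    if (tid == 0) partial[blockIdx.x] = buf[0];
}

int main() {
    const int blocks = (N + BLOCK - 1) / BLOCK;

    // Dummy host data: constant features, alphas, and labels.
    std::vector<float> hX(N * D, 0.5f), hAlpha(N, 0.1f), hY(N, 1.0f), hQ(D, 0.5f);

    float *dX, *dAlpha, *dY, *dQ, *dTerms, *dPartial;
    cudaMalloc((void**)&dX, N * D * sizeof(float));
    cudaMalloc((void**)&dAlpha, N * sizeof(float));
    cudaMalloc((void**)&dY, N * sizeof(float));
    cudaMalloc((void**)&dQ, D * sizeof(float));
    cudaMalloc((void**)&dTerms, N * sizeof(float));
    cudaMalloc((void**)&dPartial, blocks * sizeof(float));

    cudaMemcpy(dX, hX.data(), N * D * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dAlpha, hAlpha.data(), N * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dY, hY.data(), N * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dQ, hQ.data(), D * sizeof(float), cudaMemcpyHostToDevice);

    // Map over all training examples, then reduce per block.
    mapKernelTerms<<<blocks, BLOCK>>>(dX, dAlpha, dY, dQ, 0.5f, dTerms);
    reducePartialSums<<<blocks, BLOCK>>>(dTerms, dPartial);

    // Final aggregation of per-block partial sums on the host (CPU side).
    std::vector<float> hPartial(blocks);
    cudaMemcpy(hPartial.data(), dPartial, blocks * sizeof(float), cudaMemcpyDeviceToHost);
    float sum = 0.0f;
    for (float p : hPartial) sum += p;
    printf("decision value (without bias term): %f\n", sum);

    cudaFree(dX); cudaFree(dAlpha); cudaFree(dY); cudaFree(dQ);
    cudaFree(dTerms); cudaFree(dPartial);
    return 0;
}

In the architecture the paper describes, kernels of this kind would presumably be wrapped as Map and Reduce operators and scheduled across multiple GPU devices and nodes, with CPU cores coordinating I/O and the final aggregation; the sketch above only illustrates the per-device map/reduce pattern.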

Published in:

2011 IEEE International Conference on Systems, Man, and Cybernetics (SMC)

Date of Conference:

9-12 Oct. 2011