Parallel algorithms to a parallel hardware: Designing vision algorithms for a GPU

3 Author(s)

Abstract:

The GPU has become an affordable solution for accelerating slow processes on commercial systems. Most achievements in applying it to non-rendering problems are exact re-implementations of existing algorithms designed for a serial CPU. We study the conditions of a good parallel algorithm and show that it is possible to design an algorithm targeted at parallel hardware, even if it would be useless on a CPU. The optical flow estimation problem is investigated to demonstrate this possibility. In some time-critical applications, obtaining results within a limited time matters more than improving their quality. We therefore focus on designing optical flow approximation algorithms tailored for a GPU that produce a reasonable result as fast as possible, by reformulating the problem as change detection with hypothesis generation using features tracked in advance. Two parallel algorithms are proposed: direct interpolation and multiple hypothesis testing. We discuss implementation issues in the CUDA framework. Both methods run on a GPU at near video rate, providing reasonable results for time-critical applications. These GPU-tailored algorithms run about 240 times faster than the equivalent serial implementations, which are too slow to be useful in practice.
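To make the hypothesis-testing idea concrete, the sketch below shows one plausible CUDA kernel shape, not the authors' code: each pixel independently evaluates a small set of candidate flow vectors (e.g., generated from sparsely tracked features) and keeps the one with the smallest brightness-constancy residual. The kernel name, NUM_HYPOTHESES, and the single-channel float image layout are illustrative assumptions.

#include <cuda_runtime.h>
#include <cfloat>

#define NUM_HYPOTHESES 8   // candidate flow vectors per pixel (assumed value)

// Hypothetical per-pixel hypothesis test: pick the candidate displacement that
// best explains the change between two grayscale frames.
__global__ void pickBestHypothesis(const float* prev, const float* curr,
                                   const float2* hypotheses,   // NUM_HYPOTHESES candidates
                                   float2* flow,
                                   int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    float i0 = prev[y * width + x];
    float bestCost = FLT_MAX;
    float2 best = make_float2(0.f, 0.f);

    // Every pixel tests all hypotheses independently, so the work maps
    // directly onto the GPU's data-parallel execution model.
    for (int h = 0; h < NUM_HYPOTHESES; ++h) {
        float2 d = hypotheses[h];
        int xs = x + (int)d.x;
        int ys = y + (int)d.y;
        if (xs < 0 || xs >= width || ys < 0 || ys >= height) continue;
        float cost = fabsf(curr[ys * width + xs] - i0);  // brightness-constancy residual
        if (cost < bestCost) { bestCost = cost; best = d; }
    }
    flow[y * width + x] = best;
}

On a CPU this brute-force test over all candidates at every pixel would be wasteful, which is exactly the point made in the abstract: an algorithm of this shape only becomes attractive once thousands of threads run it concurrently.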

Published in:

2009 IEEE 12th International Conference on Computer Vision Workshops (ICCV Workshops)

Date of Conference:

Sept. 27 - Oct. 4, 2009