
Exploiting collective knowledge in an image folksonomy for semantic-based near-duplicate video detection

3 Author(s)
Hyun-seok Min; W. De Neve; Yong Man Ro — Image & Video Systems Lab., Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea

An increasing number of duplicates and near-duplicates can be found on video sharing websites. These duplicates and near-duplicates often infringe copyright or clutter search results. Consequently, there is a strong need for techniques that can identify duplicates and near-duplicates. In this paper, we propose a semantic-based approach to identifying near-duplicates. Our approach makes use of semantic video signatures that are constructed by detecting semantic concepts along the temporal axis of video sequences. Specifically, we make use of an image folksonomy (i.e., a set of user-contributed images annotated with user-supplied tags) to detect semantic concepts in video sequences, making it possible to exploit an unrestricted concept vocabulary. Comparative experiments using the MUSCLE-VCD-2007 dataset and folksonomy images retrieved from Flickr show that our approach is successful in identifying near-duplicates.
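The abstract's pipeline — per-frame concept detection followed by signature comparison — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the use of tag-frequency vectors as concept scores, and the frame-wise cosine comparison are all assumptions made for clarity.

```python
from collections import Counter
import math

def concept_scores(frame_tags, vocabulary):
    # Hypothetical concept detector: score each vocabulary concept for one
    # frame by the relative frequency of matching folksonomy tags.
    counts = Counter(t for t in frame_tags if t in vocabulary)
    total = sum(counts.values()) or 1
    return [counts[c] / total for c in vocabulary]

def video_signature(frames_tags, vocabulary):
    # A semantic video signature: one concept-score vector per sampled frame,
    # ordered along the temporal axis.
    return [concept_scores(tags, vocabulary) for tags in frames_tags]

def cosine(u, v):
    # Cosine similarity between two concept-score vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def signature_similarity(sig_a, sig_b):
    # Mean frame-wise cosine similarity over the shorter of the two
    # signatures; a high score suggests a near-duplicate pair.
    n = min(len(sig_a), len(sig_b))
    if n == 0:
        return 0.0
    return sum(cosine(sig_a[i], sig_b[i]) for i in range(n)) / n
```

In practice the per-frame concepts would come from matching frames against tagged folksonomy images rather than from ready-made tag lists, but the signature-construction and comparison steps follow the same shape.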

Published in:

2010 17th IEEE International Conference on Image Processing (ICIP)

Date of Conference:

26-29 Sept. 2010