Multi-level trajectory modeling for video copy detection

6 Author(s): Shi Chen; Jinqiao Wang; Yi Ouyang; Bo Wang; et al. — Inst. of Autom., Chinese Acad. of Sci., Beijing, China

The main issue in video copy detection is to estimate a constant spatial-temporal transformation at the object level between the original video and its copies. In this paper, we propose a multi-level trajectory modeling approach for video copy detection. It combines a rich trajectory description with a robust trajectory-to-trajectory matching scheme to preserve and exploit trajectory characteristics in both spatial-temporal space and feature space. We describe trajectories at three levels: feature-level descriptors, spatial-temporal coordinates, and high-level dynamic behaviors. After extracting the trajectories of videos, we apply a two-stage trajectory-to-trajectory parametric matching technique to recover an optimal spatial-temporal transformation between the query video and the database videos. To speed up detection, we use Locality Sensitive Hashing (LSH) to index and query trajectories by their dynamic behaviors and features. Extensive experiments on 100 hours of video from TRECVID 2008 demonstrate the effectiveness of our approach.
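To illustrate the LSH indexing step described above, the following is a minimal sketch of a random-hyperplane LSH index over fixed-length trajectory descriptors. The class name, parameters, and descriptor format are assumptions for illustration, not the authors' implementation; the paper's actual index hashes both dynamic behaviors and features.

```python
import numpy as np

class TrajectoryLSH:
    """Illustrative random-hyperplane LSH index for trajectory descriptors.

    A descriptor is hashed by the sign pattern of its projections onto
    random hyperplanes; descriptors with similar directions tend to land
    in the same bucket, so a query only scans one bucket instead of the
    whole database. All names/parameters here are assumptions.
    """

    def __init__(self, dim, n_bits=16, seed=0):
        rng = np.random.default_rng(seed)
        # One random hyperplane (normal vector) per hash bit.
        self.planes = rng.normal(size=(n_bits, dim))
        self.buckets = {}

    def _key(self, descriptor):
        # Sign of each projection gives one bit of the bucket key.
        return tuple((self.planes @ descriptor > 0).astype(int))

    def add(self, traj_id, descriptor):
        # Insert a trajectory id into the bucket for its hash key.
        self.buckets.setdefault(self._key(descriptor), []).append(traj_id)

    def query(self, descriptor):
        # Return candidate trajectory ids sharing the query's bucket.
        return self.buckets.get(self._key(descriptor), [])

# Usage: index two descriptors and query with one of them.
index = TrajectoryLSH(dim=8)
index.add("traj_a", np.ones(8))
index.add("traj_b", -np.ones(8))
print(index.query(np.ones(8)))  # candidate list containing "traj_a"
```

In practice, the candidate list returned by `query` would then be passed to the finer trajectory-to-trajectory parametric matching stage, so LSH only prunes the search rather than deciding matches.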

Published in:

2010 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

Date of Conference:

14-19 March 2010