The core problem in video copy detection is to estimate a consistent spatial-temporal transformation, at the object level, between the original video and its copies. In this paper, we propose a multi-level trajectory modeling approach for video copy detection. It combines a rich trajectory description with robust trajectory-to-trajectory matching to preserve and exploit trajectory characteristics in both spatial-temporal space and feature space. Specifically, we describe trajectories at three levels: feature-level descriptors, spatial-temporal coordinates, and high-level dynamic behaviors. After extracting trajectories from the videos, we apply a two-stage, trajectory-to-trajectory parametric matching technique to recover the optimal spatial-temporal transformation between the query video and the database videos. To speed up detection, we use Locality Sensitive Hashing (LSH) to index and query trajectories by their dynamic behaviors and features. Extensive experiments on 100 hours of video from the TRECVID 2008 dataset demonstrate the effectiveness of our approach.
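To make the indexing step concrete, the sketch below shows one common LSH variant, random-hyperplane (sign-random-projection) hashing, applied to fixed-length trajectory descriptors. This is an illustrative assumption, not the paper's implementation: the class name, the descriptor dimensionality, and the choice of hash family are all hypothetical, and the paper may use a different LSH scheme tuned to its dynamic-behavior features.

```python
import numpy as np

class RandomProjectionLSH:
    """Sketch: index fixed-length trajectory descriptors with
    random-hyperplane LSH, so near-duplicate trajectories tend to
    fall into the same hash bucket."""

    def __init__(self, dim, num_bits=16, num_tables=4, seed=0):
        rng = np.random.default_rng(seed)
        # One random hyperplane per hash bit, independently per table.
        self.planes = [rng.normal(size=(num_bits, dim))
                       for _ in range(num_tables)]
        self.tables = [{} for _ in range(num_tables)]

    def _hash(self, planes, vec):
        # The sign of the projection onto each hyperplane gives one bit;
        # the bit vector (as bytes) is the bucket key.
        bits = (planes @ vec) > 0
        return bits.tobytes()

    def insert(self, key, vec):
        # Store the trajectory id under its bucket in every table.
        for planes, table in zip(self.planes, self.tables):
            table.setdefault(self._hash(planes, vec), []).append(key)

    def query(self, vec):
        # Candidates = union over tables of ids sharing the query's bucket.
        cands = set()
        for planes, table in zip(self.planes, self.tables):
            cands.update(table.get(self._hash(planes, vec), []))
        return cands

# Hypothetical usage: index one descriptor, then retrieve it.
lsh = RandomProjectionLSH(dim=8)
descriptor = np.ones(8)
lsh.insert("traj0", descriptor)
candidates = lsh.query(descriptor)
```

Using several independent tables trades memory for recall: a near-duplicate trajectory only needs to collide with the query in one table to survive as a candidate for the subsequent parametric matching stage.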