Utilizing Related Samples to Enhance Interactive Concept-Based Video Search

6 Author(s): Jin Yuan (Sch. of Comput., Nat. Univ. of Singapore, Singapore); Zheng-Jun Zha; Yan-Tao Zheng; Meng Wang; et al.

One of the main challenges in interactive concept-based video search is the lack of sufficient relevant samples, especially for queries with complex semantics. In this paper, “related samples” are exploited to enhance interactive video search. Related samples refer to video segments that are relevant to part of the query rather than to the entire query. Compared to relevant samples, which may be rare, related samples are usually plentiful and easy to find in search results. Generally, related samples are visually similar and temporally adjacent to relevant samples. Based on these two characteristics, we develop a visual ranking model that simultaneously exploits relevant, related, and irrelevant samples, as well as a temporal ranking model that leverages the temporal relationship between related and relevant samples. An adaptive fusion method is then proposed to optimally combine these two ranking models when generating search results. We conduct extensive experiments on two real-world video datasets: TRECVID 2008 and YouTube. The experimental results show that our approach achieves at least 96% and 167% performance improvement over state-of-the-art approaches on the TRECVID 2008 and YouTube datasets, respectively.
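For intuition only, here is a minimal Python sketch of the pipeline the abstract describes. The Rocchio-style prototype, the Gaussian temporal kernel, and the fixed fusion weight are all illustrative assumptions, not the authors' formulation; the paper's actual visual and temporal ranking models and its adaptive fusion are defined in the full text.

```python
import numpy as np

def visual_scores(candidates, relevant, related, irrelevant,
                  w_rel=1.0, w_related=0.5, w_irr=1.0):
    """Score candidate shots against a prototype built from relevant,
    related, and irrelevant feedback samples (Rocchio-style; an assumed
    stand-in for the paper's visual ranking model)."""
    dim = candidates.shape[1]
    def centroid(x):
        return x.mean(axis=0) if len(x) else np.zeros(dim)
    proto = (w_rel * centroid(relevant)
             + w_related * centroid(related)
             - w_irr * centroid(irrelevant))
    denom = np.linalg.norm(candidates, axis=1) * np.linalg.norm(proto) + 1e-9
    return (candidates @ proto) / denom  # cosine similarity to prototype

def temporal_scores(candidate_times, related_times, sigma=5.0):
    """Score candidates by proximity to the nearest related sample,
    since related samples tend to neighbor relevant ones in time
    (Gaussian kernel over timestamps is an assumption)."""
    if len(related_times) == 0:
        return np.zeros(len(candidate_times))
    gaps = np.abs(candidate_times[:, None] - related_times[None, :]).min(axis=1)
    return np.exp(-gaps**2 / (2 * sigma**2))

def fused_ranking(candidates, candidate_times, relevant, related,
                  irrelevant, related_times, alpha=0.7):
    """Fuse the two models; the paper chooses the fusion adaptively per
    query, whereas alpha here is just a fixed placeholder."""
    v = visual_scores(candidates, relevant, related, irrelevant)
    t = temporal_scores(candidate_times, related_times)
    score = alpha * v + (1 - alpha) * t
    return np.argsort(-score)  # candidate indices, best first
```

The point the sketch captures is that related samples contribute to both models, so they can drive the ranking even when truly relevant samples are scarce.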

Published in: IEEE Transactions on Multimedia (Volume: 13, Issue: 6)