Abstract:
In this paper, we propose an efficient query-by-example method for large-scale video data. To implement it, we address the following three problems. The first is that large-scale video data includes many shots relevant to the same query. Because these shots exhibit significantly different features due to camera techniques and settings, they cannot be retrieved by a single model. Thus, we use "rough set theory" to extract multiple classification rules from example shots. That is, we aim to retrieve a variety of relevant shots, where each rule is specialized in retrieving relevant shots that share certain features. The second problem is the high computational cost of the retrieval process on large-scale video data. To overcome this, we parallelize the process using "MapReduce", a parallel programming model that enables efficient data distribution and aggregation. The final problem is that large-scale video data includes many shots whose features are similar to those of example shots but which are clearly irrelevant to the query. Consequently, the retrieval result includes several clearly irrelevant shots. To filter these out, we incorporate a "video ontology" as a knowledge base in our method. Experimental results on TRECVID 2009 video data validate the effectiveness of our method.
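The abstract only outlines the pipeline; the paper's actual implementation is not reproduced here. The following is a minimal, single-process sketch of the overall flow under stated assumptions: the rules (rule_high_motion, rule_dominant_color), the shot feature fields, and the irrelevant-shot set standing in for the video ontology are all hypothetical placeholders, and the map/reduce phases are plain Python functions rather than a real MapReduce deployment.

```python
from collections import defaultdict

# Hypothetical classification rules (the paper derives such rules from
# example shots via rough set theory; here they are simple predicates
# over a shot's feature dictionary, for illustration only).
def rule_high_motion(shot):
    return shot["motion"] > 0.7

def rule_dominant_color(shot):
    return shot["color_hist_peak"] > 0.5

RULES = [("high_motion", rule_high_motion),
         ("dominant_color", rule_dominant_color)]

def map_phase(shots):
    """Map step: test every shot against every rule and emit
    (shot_id, 1) for each rule the shot satisfies."""
    for shot in shots:
        for _name, rule in RULES:
            if rule(shot):
                yield shot["id"], 1

def reduce_phase(pairs):
    """Reduce step: aggregate rule hits per shot id."""
    scores = defaultdict(int)
    for shot_id, hit in pairs:
        scores[shot_id] += hit
    return scores

def ontology_filter(scores, irrelevant_ids):
    """Stand-in for the ontology-based filtering step: drop shots
    that the knowledge base marks as clearly irrelevant."""
    return {sid: s for sid, s in scores.items() if sid not in irrelevant_ids}

if __name__ == "__main__":
    shots = [
        {"id": "shot_001", "motion": 0.9, "color_hist_peak": 0.2},
        {"id": "shot_002", "motion": 0.3, "color_hist_peak": 0.8},
        {"id": "shot_003", "motion": 0.8, "color_hist_peak": 0.6},
    ]
    scores = reduce_phase(map_phase(shots))
    ranked = ontology_filter(scores, irrelevant_ids={"shot_002"})
    print(sorted(ranked.items(), key=lambda kv: -kv[1]))
```

In an actual MapReduce setting, the map phase would run in parallel over partitions of the shot collection and the framework would shuffle the emitted pairs to reducers; this sketch only mirrors that data flow in memory.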
Date of Conference: 05-07 December 2010
Date Added to IEEE Xplore: 27 May 2011