Semantic concept detection from multimedia features enables high-level access to multimedia content. While constructing robust detectors is feasible for concepts with sufficient training samples, detectors for concepts with few training samples are hard to train reliably. Comparable performance may be possible if the dependence of these concepts on the ones that can be robustly modeled is exploited. In this paper we demonstrate this phenomenon using the TREC Video 2002 corpus as a test bed. Using a basic set of 12 semantic concepts modeled with support vector machines, we predict the presence of 4 other concepts. We then compare the performance of these predictors with direct SVM models for the same 4 concepts and observe improvements of up to 150% in average precision.
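The core idea, training a classifier for a rare concept on the outputs of robust base-concept detectors rather than on low-level features, can be sketched as follows. This is a minimal illustration with synthetic data, not the paper's actual pipeline: the detector scores, the hypothetical rare concept, and the use of scikit-learn's `SVC` are all assumptions for demonstration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)

# Synthetic stand-in for confidence scores produced by 12 base-concept
# SVM detectors over 500 video shots (assumed data, not the TREC corpus).
n_shots = 500
base_scores = rng.random((n_shots, 12))

# Hypothetical rare concept that co-occurs with two base concepts
# (e.g. something like "outdoors" + "sky" implying the rare concept).
labels = ((base_scores[:, 0] + base_scores[:, 1]) > 1.2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    base_scores, labels, test_size=0.3, random_state=0)

# Train an SVM whose inputs are the base detectors' scores, exploiting
# the rare concept's dependence on the robustly modeled concepts.
clf = SVC(probability=True, random_state=0).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]

ap = average_precision_score(y_test, scores)
print(f"average precision: {ap:.2f}")
```

Because the synthetic rare concept is a deterministic function of two base-detector scores, the stacked SVM recovers it easily; with real corpus annotations the dependence is statistical and the gain is correspondingly smaller.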