Automatic semantic concept detection in images is a promising tool for alleviating the user effort of annotating and cataloging digital media collections. It enables automatic identification of people, places, and objects, for example to enhance indexing and searching of home photographs. While constructing robust semantic detectors has been shown to be feasible for global, generic concepts with a sufficient number of good training examples (e.g., indoors, outdoors), many interesting concepts, such as face or people, occur at sub-picture granularity: they occupy only a portion of the image and therefore frequently have training examples with a reduced signal-to-noise ratio. Such regional concepts are harder to detect because imperfections in automatic image segmentation algorithms lead to inaccurate object boundaries and low-level feature ambiguities. In this paper we focus on the problem of boosting the detection performance of existing regional concept detectors by exploiting detection redundancy. Specifically, we propose to apply the same detector multiple times to evaluate and combine multiple detection hypotheses for the same content, but at different content granularities, in order to reduce detection sensitivity to segmentation errors. We validate the approach using support vector machine classifiers for 14 regional semantic concepts from the NIST TRECVID 2003 common annotation lexicon and show performance improvements of multigranular detection and fusion.
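The multigranular scheme described above can be sketched minimally as follows. This is an illustrative assumption, not the paper's implementation: a simple linear decision function (`svm_score`, with hypothetical weights `w` and bias `b`) stands in for a trained SVM regional-concept detector, and the same detector is applied to region features extracted at several segmentation granularities before the hypotheses are fused.

```python
import numpy as np

def svm_score(features, w, b):
    # Stand-in for a trained SVM's decision function; w and b are
    # hypothetical parameters, not values from the paper.
    return float(np.dot(w, features) + b)

def multigranular_detect(granularity_features, w, b, fusion="max"):
    """Apply the same concept detector to region features extracted at
    several segmentation granularities, then fuse the detection
    hypotheses (max or mean fusion) into a single confidence score."""
    scores = [svm_score(f, w, b) for f in granularity_features]
    if fusion == "max":
        return max(scores)
    if fusion == "mean":
        return sum(scores) / len(scores)
    raise ValueError(f"unknown fusion method: {fusion}")

# Toy usage: three granularities yield three feature vectors for the
# same image content; fusion reduces sensitivity to any single
# (possibly mis-segmented) hypothesis.
w, b = np.array([0.5, -0.2]), 0.1
feats = [np.array([1.0, 0.0]), np.array([0.6, 0.4]), np.array([0.2, 0.8])]
print(multigranular_detect(feats, w, b))            # max over {0.6, 0.32, 0.04}
print(multigranular_detect(feats, w, b, "mean"))    # average of the three scores
```

In practice the fusion rule is a design choice: max fusion trusts the single best-matching granularity, while mean fusion rewards concepts that are detected consistently across segmentations.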