This paper evaluates the semantic information content of multiscale, low-level image segmentation. To do so, we use selected segmentation features for the semantic classification of real images. To estimate the relative information content of our features, we compare the classification results we obtain with them against results reported by others using the commonly used patch/grid-based features. To classify an image with segmentation-based features, we model the image as a probability density function over its region features, specifically a Gaussian mixture model (GMM). This GMM is fit to the image by adapting a universal GMM, estimated over all images, using a maximum a posteriori (MAP) criterion. We measure the similarity between two GMMs with a kernelized version of the Bhattacharyya distance and perform classification with support vector machines. We outperform previously reported results on a publicly available scene classification dataset. These results motivate further experiments evaluating the promise of low-level segmentation for image classification.
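The similarity measure described above can be sketched in code. The following is a minimal illustration, not the paper's implementation: it assumes diagonal-covariance GMMs whose components are index-aligned (a natural consequence of MAP-adapting a shared universal GMM), and approximates the Bhattacharyya kernel between two GMMs as a weighted sum of closed-form Bhattacharyya coefficients between corresponding Gaussian components. All function names and the per-component matching scheme are illustrative assumptions.

```python
import numpy as np

def gauss_bhattacharyya(m1, v1, m2, v2):
    """Closed-form Bhattacharyya coefficient between two diagonal Gaussians.
    m*, v* are 1-D arrays of per-dimension means and variances."""
    v = 0.5 * (v1 + v2)                        # averaged (diagonal) covariance
    dist = 0.125 * np.sum((m1 - m2) ** 2 / v)  # mean-separation term
    dist += 0.5 * np.sum(np.log(v) - 0.5 * (np.log(v1) + np.log(v2)))
    return np.exp(-dist)                       # coefficient in (0, 1]

def gmm_bhattacharyya_kernel(gmm_a, gmm_b):
    """Approximate Bhattacharyya kernel between two GMMs with index-aligned
    components, as produced by MAP adaptation of one universal GMM.
    Each gmm is a (weights, means, variances) tuple; a GMM compared with
    itself yields 1 when its weights sum to 1."""
    wa, ma, va = gmm_a
    wb, mb, vb = gmm_b
    return sum(np.sqrt(wi * wj) * gauss_bhattacharyya(mi, vi, mj, vj)
               for wi, mi, vi, wj, mj, vj in zip(wa, ma, va, wb, mb, vb))
```

The resulting kernel matrix over a set of images could then be passed to an SVM with a precomputed kernel (e.g. scikit-learn's `SVC(kernel="precomputed")`) to perform the classification step.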