Despite significant efforts in content-based indexing and retrieval in recent years, retrieving relevant and accurate images remains a difficult task. State-of-the-art approaches can be effective for queries whose semantic content is easily translated into visual features. For example, finding images of fires is simple because fires are characterized by specific colors (yellow and red). However, these approaches are not effective in application fields where the semantic content of the query is not easily translated into visual features. For example, finding images of birds during migrations is not easy because the system has to understand the semantics of the query. Basic visual features may be useful (a bird is characterized by a texture and a color), but they are not sufficient. What is missing is a generalization capability. Birds during migrations belong to the same repository of birds, so they share common associations among basic features (e.g., textures and colors) that the user cannot specify explicitly. We present an approach that discovers hidden associations among features during image indexing. These associations discriminate between image repositories, and the best ones are selected on the basis of confidence measures. To reduce the combinatorial explosion of candidate associations, which arises because images in the database contain very large numbers of colors and textures, we use a visual dictionary that groups together similar colors and textures.
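To make the two key ideas concrete, the following is a minimal, illustrative sketch (not the paper's actual implementation): raw feature values are quantized into a small visual dictionary, and candidate associations between dictionary entries are scored by confidence, i.e., conf(a → b) = support(a, b) / support(a). All data, bin sizes, and feature names below are hypothetical.

```python
# Hypothetical sketch of a visual dictionary plus confidence-scored
# feature associations; not the authors' actual algorithm.
from collections import Counter
from itertools import combinations

def quantize(value, bin_size=64):
    """Map a raw 0-255 channel value to a coarse dictionary entry,
    grouping similar colors/textures to curb combinatorial explosion."""
    return value // bin_size

# Hypothetical per-image feature sets: quantized (kind, code) pairs.
images = [
    {("color", quantize(250)), ("texture", 1)},  # e.g., fire-like image
    {("color", quantize(240)), ("texture", 1)},
    {("color", quantize(30)),  ("texture", 2)},
]

# Count support of single features and of feature pairs.
single = Counter()
pair = Counter()
for feats in images:
    for f in feats:
        single[f] += 1
    for a, b in combinations(sorted(feats), 2):
        pair[(a, b)] += 1

def confidence(a, b):
    """conf(a -> b) = support(a, b) / support(a)."""
    key = tuple(sorted((a, b)))
    return pair[key] / single[a] if single[a] else 0.0

# Associations with high confidence discriminate an image repository.
print(confidence(("color", 3), ("texture", 1)))  # -> 1.0
```

In this toy collection, the quantized bright color (code 3) always co-occurs with texture 1, so the association has confidence 1.0; in the approach described above, such high-confidence associations are the ones retained at indexing time.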