Voluminous medical images are generated daily, and they are critical assets for medical diagnosis, research, and teaching. To facilitate automatic indexing and retrieval of large medical-image databases, both images and associated texts are indexed using medical concepts from the Unified Medical Language System (UMLS) Metathesaurus. We propose a structured learning framework based on support vector machines to facilitate modular design and learning of medical semantics from images. Within this framework, we present two complementary visual indexing approaches: a global indexing to access image modality and a local indexing to access semantic local features. Two fusion approaches are developed to improve textual retrieval using the UMLS-based image indexing. First, a simple fusion of the textual and visual retrieval approaches is proposed, significantly improving the results of both text-only and image-only retrieval. Second, a visual modality filter is designed to remove visually aberrant images according to the query modality concept(s). Using the ImageCLEFmed database, we demonstrate the effectiveness of our framework, which outperforms the automatic runs evaluated in 2005 on the same medical-image retrieval task.
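The two fusion strategies named above (score fusion and modality filtering) might be sketched as follows; since the abstract gives no implementation details, the linear weight `alpha`, the min–max score normalization, and all function names here are illustrative assumptions, not the authors' actual method.

```python
def fuse_scores(text_scores, visual_scores, alpha=0.5):
    """Linear late fusion of text and visual retrieval scores (assumed scheme).

    Each argument maps a document/image id to a retrieval score; scores are
    min-max normalized per modality before being combined with weight alpha.
    """
    def normalize(scores):
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0  # avoid division by zero when scores are constant
        return {d: (s - lo) / span for d, s in scores.items()}

    t, v = normalize(text_scores), normalize(visual_scores)
    docs = set(t) | set(v)  # union: a document may appear in only one ranking
    return {d: alpha * t.get(d, 0.0) + (1 - alpha) * v.get(d, 0.0) for d in docs}


def filter_by_modality(fused_scores, predicted_modality, query_modalities):
    """Drop images whose visually predicted modality does not match the query.

    predicted_modality maps an image id to its classified modality concept
    (e.g. "x-ray"); query_modalities is the set of modality concepts extracted
    from the query.
    """
    return {d: s for d, s in fused_scores.items()
            if predicted_modality.get(d) in query_modalities}
```

For example, fusing a text ranking with a visual one and then keeping only images whose predicted modality matches the query would rerank the list while removing visually aberrant results.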