This paper presents an interactive biomedical image retrieval system based on automatic extraction of visual regions of interest (ROIs) and their classification into visual concepts. In biomedical articles, authors often overlay annotation markers such as arrows, letters, or symbols on figures and illustrations to highlight ROIs. These annotations are then referenced and correlated with concepts in the caption text or in figure citations in the article body. This association bridges the visual characteristics of important regions within an image and their semantic interpretation. Our proposed method first localizes and recognizes the annotations using a combination of rule-based and statistical image-processing techniques. Identifying these markers assists in extracting ROIs that are likely to be highly relevant to the discussion in the article text. The image regions are then annotated for classification using biomedical concepts obtained from a glossary of imaging terms. The same automatic ROI extraction can be applied to query images, or a user may mark an ROI interactively. As a result, the visual characteristics of the ROIs can be mapped to text concepts and used to search image captions. In addition, based on user feedback, the system can switch the search from a purely visual to a textual one (cross-modal) or integrate visual and textual search in a single process (multi-modal). The hypothesis that such an approach improves biomedical image retrieval is validated through experiments on a biomedical article dataset of thoracic CT scans from the collection of the ImageCLEF'2010 medical retrieval track.
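The marker-to-concept linking step described above can be illustrated with a minimal sketch. This is not the authors' implementation: the detection stage (rule-based plus statistical recognition of arrows, letters, and symbols) is assumed to have already produced marker labels with bounding boxes, and the caption matching here is a deliberately crude clause-level heuristic; the function name, the caption text, and the marker tuples are all hypothetical.

```python
import re

def link_markers_to_caption(markers, caption):
    """Associate each detected annotation marker with a caption phrase.

    markers: list of (label, bbox) tuples from a hypothetical detection stage,
             where bbox is (x, y, w, h) in image coordinates.
    caption: figure caption text that references the markers, e.g. "arrow A".
    Returns a dict {label: clause} for labels found in the caption.
    """
    # Split the caption into crude clauses; each clause is treated as one
    # candidate "concept" phrase for a marker it mentions.
    clauses = [c.strip() for c in re.split(r"[.;,]", caption) if c.strip()]
    links = {}
    for label, _bbox in markers:
        for clause in clauses:
            # Match the label as a standalone (case-sensitive) token so that
            # "A" does not match the "A" inside "Axial" or the article "a".
            if re.search(r"\b" + re.escape(label) + r"\b", clause):
                links[label] = clause
                break
    return links

# Hypothetical example: two letter markers detected on a thoracic CT figure.
caption = ("Axial CT image; arrow A indicates a pulmonary nodule, "
           "B marks pleural effusion.")
markers = [("A", (120, 80, 20, 20)), ("B", (200, 150, 20, 20))]
print(link_markers_to_caption(markers, caption))
# → {'A': 'arrow A indicates a pulmonary nodule', 'B': 'B marks pleural effusion'}
```

The linked phrases could then be normalized against a glossary of imaging terms to yield the visual-to-text concept mapping that the cross-modal search relies on.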
Date of Conference: 27-28 Sept. 2012