An Interactive Image Retrieval Framework for Biomedical Articles Based on Visual Region-of-Interest (ROI) Identification and Classification

6 Author(s): Rahman, M.M. (Lister Hill Nat. Center for Biomed. Commun., Nat. Libr. of Med., Bethesda, MD, USA); You, Daekeun; Simpson, M.S.; Antani, S.K.; and others

This paper presents an interactive biomedical image retrieval system based on automatic visual region-of-interest (ROI) extraction and classification into visual concepts. In biomedical articles, authors often use annotation markers such as arrows, letters, or symbols overlaid on figures and illustrations to highlight ROIs. These annotations are then referenced and correlated with concepts in the caption text or in figure citations in the article text. This association creates a bridge between the visual characteristics of important regions within an image and their semantic interpretation. Our proposed method first localizes and recognizes the annotations using a combination of rule-based and statistical image processing techniques. Identifying these annotations assists in extracting ROIs that are likely to be highly relevant to the discussion in the article text. The image regions are then annotated for classification using biomedical concepts obtained from a glossary of imaging terms. The same automatic ROI extraction can be applied to query images, or the user may interactively mark an ROI. As a result, visual characteristics of the ROIs can be mapped to text concepts and then used to search image captions. In addition, the system can toggle the search process from purely visual to textual (cross-modal), or integrate visual and textual search in a single process (multi-modal), based on user feedback. The hypothesis that such approaches improve biomedical image retrieval is validated through experiments on a biomedical article dataset of thoracic CT scans from the ImageCLEF'2010 medical retrieval track collection.
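The abstract describes toggling between purely visual, textual (cross-modal), and combined (multi-modal) search driven by user feedback. The sketch below is a minimal, hypothetical illustration of that fusion idea as a weighted combination of visual and textual similarity scores; the feature vectors, function names, and weighting scheme are assumptions for illustration only, not the system described in the paper.

import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

def fused_score(query_visual, query_text, doc_visual, doc_text, alpha=0.5):
    """Linearly combine visual and textual similarities.

    alpha = 1.0 -> purely visual search
    alpha = 0.0 -> purely textual (cross-modal) search
    0 < alpha < 1 -> multi-modal fusion
    """
    s_vis = cosine_similarity(query_visual, doc_visual)
    s_txt = cosine_similarity(query_text, doc_text)
    return alpha * s_vis + (1.0 - alpha) * s_txt

def rank_images(query, collection, alpha=0.5):
    """Rank collection items (dicts with 'visual' and 'text' vectors) by fused score."""
    scored = [
        (item["id"], fused_score(query["visual"], query["text"],
                                 item["visual"], item["text"], alpha))
        for item in collection
    ]
    return sorted(scored, key=lambda x: x[1], reverse=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy ROI feature vectors and caption-concept vectors (hypothetical sizes).
    query = {"visual": rng.random(64), "text": rng.random(32)}
    collection = [
        {"id": f"fig{i}", "visual": rng.random(64), "text": rng.random(32)}
        for i in range(5)
    ]
    # User feedback could adjust alpha, e.g. lowering it toward a textual search
    # when the user emphasizes caption concepts.
    print(rank_images(query, collection, alpha=0.3))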

Published in: 2012 IEEE Second International Conference on Healthcare Informatics, Imaging and Systems Biology (HISB)

Date of Conference: 27-28 Sept. 2012