
Points of Interest and Visual Dictionaries for Automatic Retinal Lesion Detection

5 Author(s)
Rocha, A. ; Inst. of Comput., Univ. of Campinas (Unicamp), Campinas, Brazil ; Carvalho, T. ; Jelinek, H.F. ; Goldenstein, S.

In this paper, we present an algorithm to detect the presence of diabetic retinopathy (DR)-related lesions in fundus images based on a common analytical approach that is capable of identifying both red and bright lesions without requiring specific pre- or postprocessing. Our solution constructs a visual-word dictionary representing points of interest (PoIs) located within regions, marked by specialists, that contain lesions associated with DR, and classifies fundus images as normal or as showing DR-related pathology based on the presence or absence of these PoIs. The novelty of our approach lies in locating DR lesions in optic fundus images using visual words that combine feature information contained within the images, in a framework easily extendible to different types of retinal lesions or pathologies, and in building a specific projection space for each class of interest (e.g., white lesions such as exudates, or normal regions) instead of a common dictionary for all classes. The visual-word dictionary was applied to classifying bright and red lesions with classical cross-validation and cross-dataset validation to indicate the robustness of this approach. We obtained an area under the curve (AUC) of 95.3% for white-lesion detection and an AUC of 93.3% for red-lesion detection using fivefold cross-validation and our own data, consisting of 687 images of normal retinae, 245 images with bright lesions, 191 with red lesions, and 109 with signs of both bright and red lesions. For cross-dataset analysis, the visual dictionary also achieves compelling results using our images as the training set and the RetiDB and Messidor images as test sets. In this case, image classification resulted in an AUC of 88.1% on the RetiDB dataset and an AUC of 89.3% on the Messidor dataset, in both cases for bright-lesion detection.
The results indicate the potential for training with images acquired under different setup conditions, with a high accuracy of referral based on the presence of red lesions, bright lesions, or both. The robustness of the visual dictionary against image quality (blurring), resolution, and retinal background makes it a strong candidate for DR screening of large, diverse communities with varying cameras, settings, and levels of expertise for image capture.
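The visual-word pipeline the abstract outlines (cluster PoI descriptors into a dictionary of visual words, then represent each image by its histogram over those words) can be sketched as follows. This is a minimal illustration using synthetic descriptors and a plain k-means clustering, not the authors' actual feature-extraction or classifier setup; all function names and parameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_dictionary(descriptors, k=4, iters=20):
    """Cluster PoI descriptors into k 'visual words' with a basic k-means."""
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)].copy()
    for _ in range(iters):
        # assign each descriptor to its nearest current center
        dists = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its assigned descriptors
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def bow_histogram(descriptors, centers):
    """Represent one image as a normalized histogram over the visual words."""
    dists = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = dists.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()

# Synthetic stand-ins for descriptors from 'lesion' and 'normal' regions:
lesion_desc = rng.normal(3.0, 0.1, size=(30, 2))
normal_desc = rng.normal(0.0, 0.1, size=(30, 2))
dictionary = build_dictionary(np.vstack([lesion_desc, normal_desc]), k=2)

# Each image then becomes a k-bin histogram fed to a downstream classifier.
hist = bow_histogram(lesion_desc, dictionary)
```

In the paper's variant, a separate projection space (dictionary) is built per class of interest rather than one shared dictionary, so an image produces one histogram per class; the sketch above shows only the single-dictionary core.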

Published in:

IEEE Transactions on Biomedical Engineering (Volume: 59, Issue: 8)