A feature-based technique for joint, linear estimation of high-order image-to-mosaic transformations: application to mosaicing the curved human retina

4 Author(s)
A. Can, C.V. Stewart, B. Roysam, H.L. Tanenbaum (Rensselaer Polytechnic Institute, Troy, NY, USA)

Methods are presented for increasing the coverage and accuracy of image mosaics constructed from multiple, uncalibrated, weak-perspective views of the human retina. Building on our previous algorithm for registering pairs of images, which uses a non-invertible, 12-parameter, quadratic image transformation model and a hierarchical, robust estimation technique, we present two important innovations. The first is a linear, non-iterative method for jointly estimating the transformations of all images onto the mosaic. This method employs constraints derived from pairwise matching between the non-mosaic image frames. It allows transformations to be estimated for images that do not overlap the mosaic anchor frame, and it yields mutually consistent transformations for all images. As a result, the mosaics can cover a much broader area of the retinal surface, even though the transformation model is not closed under composition. This capability is particularly valuable for mosaicing the retinal periphery in the context of diseases such as AIDS/CMV. The second innovation is a method that improves the accuracy of both the pairwise matches and the joint estimation by refining the feature locations and adding new features based on the transformation estimates themselves. For matching image frames of size 1024×1024, this reduces the registration error from the range of 1 to 3 pixels to about 0.55 pixels. The overall transformation error in final mosaic construction is 0.80 pixels, based on experiments over a large set of eyes.
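To make the 12-parameter quadratic transformation model concrete, the following is a minimal sketch in NumPy. It assumes the common formulation in which each output coordinate is an affine-plus-second-order polynomial in the input coordinates (6 parameters per coordinate, 12 total), and shows why estimation from point correspondences is linear: the unknowns enter linearly through a fixed monomial basis, so ordinary least squares suffices. Function names and the exact basis ordering are illustrative, not taken from the paper.

```python
import numpy as np

def quad_basis(pts):
    """Monomial basis [1, x, y, x^2, xy, y^2] for each point.

    pts: (N, 2) array of (x, y) coordinates -> (N, 6) design matrix.
    """
    x, y = pts[:, 0], pts[:, 1]
    return np.stack([np.ones_like(x), x, y, x * x, x * y, y * y], axis=1)

def apply_quadratic(theta, pts):
    """Apply a 12-parameter quadratic transform.

    theta: (2, 6) parameter matrix (one row of 6 coefficients per
    output coordinate). Returns the transformed (N, 2) points.
    """
    return quad_basis(pts) @ theta.T

def estimate_quadratic(src, dst):
    """Estimate theta from point correspondences by linear least squares.

    Because dst = B(src) @ theta.T is linear in theta, no iteration
    is needed; any >= 6 non-degenerate correspondences determine it.
    """
    B = quad_basis(src)
    theta_T, *_ = np.linalg.lstsq(B, dst, rcond=None)
    return theta_T.T
```

A quick round trip: transform random points with a known `theta`, then recover it from the correspondences. The same linear structure is what allows the pairwise constraints to be stacked into one joint system over all images, as described in the abstract.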

Published in:

Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2000), Volume 2

Date of Conference:

2000