Geometrical Analysis of Localization Error in Stereo Vision Systems

4 Author(s)
Fooladgar, F. (Isfahan Univ. of Technol., Isfahan, Iran); Samavi, S.; Soroushmehr, S.M.R.; Shirani, S.

Determining an object's location within a specific region is an important task in many machine vision applications, and several parameters affect the accuracy of the localization process. The quantization performed by a camera's charge-coupled device (CCD) is one source of error: it yields an estimate of the observed object's position rather than its exact location. A cluster of points in the camera's field of view is mapped onto a single pixel, and these points form an uncertainty region. In this paper, we present a geometrical model that uses the volume of this uncertainty region as a criterion for object localization error. The proposed approach models the field of view of each pixel as an oblique cone, and the uncertainty region is formed by the intersection of two such cones, one emanating from each camera. Because modeling the intersection of two oblique cones is complex, we propose three methods to simplify the problem. The first two methods use only four lines; each line passes through the camera's lens, modeled as a pinhole, and through one of the four vertices of a square fitted around the circular pixel. The first method projects all points of these four lines onto an image plane. The second method replaces the cone-cone intersection with line-cone intersections, which determine the boundary points of the intersection of the two cones. The third approach finds the extremum points of the intersection of the two cones with the Lagrange multiplier method. The validity of our methods is verified through extensive simulations. In addition, we analyze the effects of parameters such as the baseline length, focal length, and pixel size on the estimation error.
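The effect of pixel quantization on depth estimation can be illustrated with a first-order sketch for a rectified stereo pair under a simple pinhole model. This is not the paper's cone-intersection model; it is a minimal approximation in which disparity is bracketed by plus or minus half a pixel, with assumed (hypothetical) values for the focal length, baseline, and pixel pitch:

```python
# Hedged sketch: depth uncertainty caused by pixel quantization in a
# rectified stereo pair (pinhole model). All parameter values below are
# assumptions for illustration, not taken from the paper.

def depth_from_disparity(f, b, d):
    # For a rectified pair: Z = f * b / d,
    # where f is focal length (pixels), b baseline, d disparity (pixels).
    return f * b / d

def depth_uncertainty(f, b, z, pixel):
    # Disparity of a point at depth z, then bracket it by +/- half a
    # pixel to model quantization on the sensor.
    d = f * b / z
    z_near = f * b / (d + pixel / 2.0)  # disparity rounded up
    z_far = f * b / (d - pixel / 2.0)   # disparity rounded down
    return z_far - z_near

f = 1000.0   # focal length in pixels (assumed)
b = 0.1      # baseline in metres (assumed)
pixel = 1.0  # pixel pitch in pixel units (assumed)

for z in (1.0, 2.0, 4.0):
    print(f"depth {z} m -> uncertainty {depth_uncertainty(f, b, z, pixel):.4f} m")
```

Running the loop shows the qualitative behavior the abstract analyzes: the uncertainty interval grows roughly quadratically with depth and shrinks as the baseline or focal length increases.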

Published in:

IEEE Sensors Journal (Volume: 13, Issue: 11)

Date of Publication:

Nov. 2013
