Determining the location of a target in a specific region is an important goal in many machine vision applications. The accuracy of target localization depends on a number of parameters. The quantization process in the CCD of a camera node is one source of error: only an estimate of the target location, rather than its exact position, can be obtained. In this paper, we present a geometrical approach to analyzing this error. The proposed approach models the field of view of each pixel as an oblique cone; the ambiguity in localization with two cameras in arbitrary configurations is therefore captured by the intersection of two oblique cones. We use the difference between the maximum and minimum points of the cone intersection, along all three dimensions, as a criterion for estimating the error, and we determine these extremum points with the Lagrangian (Lagrange multiplier) method. We validate our model through simulations and analyze how parameters such as the baseline length, focal length, and pixel size affect the estimation error.
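To give an intuition for how baseline length, focal length, and pixel size bound the quantization error, the following is a minimal sketch for the much simpler rectified-stereo case (the paper's model uses oblique cones and arbitrary camera configurations; this reduction to a 1-D disparity interval is an illustrative assumption, and all numeric values are hypothetical, not taken from the paper):

```python
# Hedged sketch: depth-quantization uncertainty for a rectified stereo pair.
# With depth z = f*b / (d*p), a +/-0.5 pixel quantization of the disparity d
# maps to a depth interval [z_near, z_far]; its width is the error bound.
# f_mm: focal length, b_mm: baseline, p_mm: pixel size, disparity_px: disparity.

def depth_interval(f_mm, b_mm, p_mm, disparity_px):
    """Return (z_near, z, z_far) in mm for a +/-0.5 px disparity quantization."""
    z = f_mm * b_mm / (disparity_px * p_mm)          # nominal depth
    z_near = f_mm * b_mm / ((disparity_px + 0.5) * p_mm)  # larger disparity -> closer
    z_far = f_mm * b_mm / ((disparity_px - 0.5) * p_mm)   # smaller disparity -> farther
    return z_near, z, z_far

if __name__ == "__main__":
    # Illustrative parameters: 8 mm lens, 120 mm baseline, 6 um pixels.
    z_near, z, z_far = depth_interval(8.0, 120.0, 0.006, 40.0)
    print(f"depth {z:.1f} mm, uncertainty {z_far - z_near:.1f} mm")
```

Consistent with the abstract's parameter study, widening the baseline or lengthening the focal length shrinks this interval, while a larger pixel size enlarges it.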