An algorithm is presented to estimate the position of a hand-held camera with respect to a 3d world model constructed from range data and color imagery. Little prior knowledge is assumed about the camera position. The algorithm proceeds in three stages: (1) generating an ordered set of initial model-to-image mapping estimates, each accurate only in a small region of the image and of the model; (2) refining each initial estimate through a combination of 3d-to-2d matching, robust parameter estimation, region growing, and model selection; and (3) testing the resulting projections for accuracy, stability, and randomness. A key issue in stage (2) is that the model-to-image mapping is initially well-approximated by a 2d-to-2d transformation based on a local model surface approximation, but the algorithm must eventually transition to the 3d-to-2d projection required to solve the position estimation problem. It accomplishes this by first expanding the region along the approximation surface and then transitioning to full 3d expansion. The overall algorithm is shown to effectively determine the location of the camera over a 100 m x 100 m area of our campus.
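As a rough illustration of the robust parameter estimation in stage (2), the sketch below fits a local 2d-to-2d affine mapping to point correspondences using a RANSAC-style loop that tolerates outlier matches. All names here are hypothetical, and a generic affine model with minimal 3-point sampling stands in for the paper's actual transformation model and estimator; this is a sketch of the general technique, not the authors' implementation.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2d affine fit: dst ≈ src @ A.T + t."""
    n = src.shape[0]
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src   # rows for dst x-coords: [x, y, 0, 0, 1, 0]
    M[0::2, 4] = 1.0
    M[1::2, 2:4] = src   # rows for dst y-coords: [0, 0, x, y, 0, 1]
    M[1::2, 5] = 1.0
    p, *_ = np.linalg.lstsq(M, dst.ravel(), rcond=None)
    return p[:4].reshape(2, 2), p[4:]

def ransac_affine(src, dst, iters=200, tol=2.0, seed=0):
    """Robustly estimate a local 2d-to-2d mapping from noisy correspondences.

    Repeatedly fits the model to a minimal 3-point sample, keeps the
    hypothesis with the most inliers, then refits on all inliers.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        A, t = fit_affine(src[idx], dst[idx])
        residuals = np.linalg.norm(src @ A.T + t - dst, axis=1)
        inliers = residuals < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    A, t = fit_affine(src[best_inliers], dst[best_inliers])
    return A, t, best_inliers
```

In the full algorithm this local estimate would only seed the process: the inlier region is then grown along the approximating surface, and the 2d-to-2d model is eventually replaced by a full 3d-to-2d projection as the region expands.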