A self-calibration technique for active vision systems

Author: Sang De Ma, Institute of Automation, Academia Sinica, Beijing, China

A manipulator wrist-mounted camera considerably facilitates motion stereo, object tracking, and active perception. An important issue in active vision is determining the camera's position and orientation relative to the camera platform (head-eye calibration or hand-eye calibration). We present a technique for calibrating the head-eye geometry and the camera intrinsic parameters. The technique allows camera self-calibration because it requires no reference object and directly uses images of the environment. Camera self-calibration is especially important where the underlying visual tasks do not permit the use of reference objects. Our method exploits the flexibility of the active vision system and bases camera calibration on a sequence of specially designed motions. It is shown that if the camera intrinsic parameters are known a priori, the orientation of the camera relative to the platform can be solved using three pure translational motions. If the intrinsic parameters are unknown, then two sequences of motion, each consisting of three orthogonal translations, are necessary to determine the camera orientation and intrinsic parameters. Once the camera orientation and intrinsic parameters are determined, the position of the camera relative to the platform can be computed from an arbitrary nontranslational motion of the platform. All the computations in our method are linear. Experimental results with real images are presented.
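The linear orientation step can be illustrated with a small sketch. This is not the paper's algorithm or notation, only an assumed setup in its spirit: each commanded platform translation `t_i` (known in the platform frame) induces an observed camera translation direction `d_i` (e.g. from the focus of expansion), related by `d_i = R t_i`. Given three non-coplanar translations, the rotation `R` can be recovered linearly as an orthogonal Procrustes problem via the SVD. All function and variable names here are illustrative.

```python
import numpy as np

def head_eye_rotation(platform_dirs, camera_dirs):
    """Least-squares rotation R such that camera_dirs[i] ~ R @ platform_dirs[i].

    Illustrative orthogonal-Procrustes (Kabsch-style) solve; not the
    paper's exact formulation.
    """
    T = np.asarray(platform_dirs, dtype=float)  # rows: translations t_i (platform frame)
    D = np.asarray(camera_dirs, dtype=float)    # rows: observed directions d_i (camera frame)
    H = T.T @ D                                 # 3x3 correlation matrix sum_i t_i d_i^T
    U, _, Vt = np.linalg.svd(H)
    # Force a proper rotation (det = +1) rather than a reflection.
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ S @ U.T

# Synthetic check with a known ground-truth rotation:
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t = np.eye(3)                 # three orthogonal unit translations
d = (R_true @ t.T).T          # ideal noise-free observed directions
R_est = head_eye_rotation(t, d)
```

With noise-free directions the estimate matches the ground truth; with noisy measurements the same SVD solve returns the nearest rotation in the least-squares sense, which is one reason a purely linear pipeline is attractive.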

Published in:

IEEE Transactions on Robotics and Automation (Volume 12, Issue 1)