High Precision Camera Calibration Method Based on Full Camera Model


Abstract:

In existing camera calibration methods, the principal point is commonly treated as the distortion center. In reality, however, the principal point and the distortion center are not strictly identical, and failing to distinguish between them reduces the accuracy of the calibrated camera parameters. To address this, a high-precision camera calibration method based on the full camera model is proposed. The method comprises the following steps. First, initial camera parameters are obtained using Zhang's calibration method. Second, using the camera's internal and external parameters, the circle centers in the image and in the world coordinate system are projected onto the normalized plane, yielding the actual and ideal positions of each circle center on that plane; the actual points are then corrected using the full camera model. Next, the distortion center and distortion parameters are calculated from the positional relationship between the actual and ideal points on the normalized plane. The corrected actual points are then projected back onto the image plane, and camera calibration is performed again. Finally, high-precision camera calibration is achieved after several iterations of projection and calibration. Experimental results demonstrate that this method significantly enhances the accuracy of camera calibration.
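The paper's full camera model is not reproduced on this page. As an illustration of the step that estimates distortion parameters from ideal/actual point pairs on the normalized plane, here is a minimal NumPy sketch of a two-coefficient radial model whose center is decoupled from the principal point, plus a linear least-squares fit of the coefficients given a candidate distortion center. The function names and the exact polynomial are assumptions for illustration, not necessarily the paper's formulation.

```python
import numpy as np

def distort(points, center, k1, k2):
    """Radial distortion about an explicit center (which need not be
    the principal point), on the normalized plane."""
    d = points - center                        # offsets from distortion center
    r2 = np.sum(d ** 2, axis=1, keepdims=True) # squared radius per point
    return center + d * (1.0 + k1 * r2 + k2 * r2 ** 2)

def fit_radial(ideal, actual, center):
    """Recover k1, k2 by linear least squares from ideal/actual pairs,
    given a candidate distortion center: actual - ideal is linear in
    (k1, k2) once the center is fixed."""
    d = ideal - center
    r2 = np.sum(d ** 2, axis=1, keepdims=True)
    # Stack the x and y residual equations into one linear system.
    A = np.hstack([(d * r2).reshape(-1, 1), (d * r2 ** 2).reshape(-1, 1)])
    b = (actual - ideal).reshape(-1, 1)
    k, *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(k[0]), float(k[1])
```

On noiseless synthetic data this recovers the coefficients exactly; in the iterative scheme described above, the candidate center itself would also be refined rather than held fixed.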
Date of Conference: 25-27 May 2024
Date Added to IEEE Xplore: 17 July 2024
Conference Location: Xi'an, China

I. Introduction

Camera calibration is a crucial procedure for determining accurate camera parameters. It establishes the relationship between 3D points in the world coordinate system and their corresponding points in the pixel coordinate system, thereby determining both the internal and external camera parameters [1]. The accuracy of these parameters directly affects the reliability of subsequent computational tasks. Internal parameters describe camera characteristics such as the focal length and principal point, while external parameters describe the camera's position and orientation. Precise determination of these parameters is essential for accurate and robust computation in a wide range of applications, making camera calibration a cornerstone of computer vision. Numerous methods have been proposed, including traditional methods, active vision-based methods, and camera self-calibration methods [2]. Traditional calibration methods establish a correspondence between the world and pixel coordinate systems using a calibration object [3]. Active vision-based methods compute camera parameters from camera motion information [4]. Camera self-calibration methods compute camera parameters from scene information, but they may be less robust in the presence of noise or other non-ideal conditions [5]. In all of these methods, radial and tangential distortions are inevitable and shift the imaging position, so calibrating the camera without accounting for them introduces significant errors [6]. To reduce calibration errors, a camera distortion model is generally used to correct the actual imaging points. However, commonly used distortion models either ignore the influence of the distortion center or directly treat the principal point as the distortion center; since the principal point is not strictly equal to the distortion center, residual errors remain.
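To illustrate why conflating the principal point with the distortion center matters, the following NumPy sketch distorts points about a center that is offset from the principal point, then undistorts them by fixed-point iteration using either the true center or the principal point (taken as the origin of the normalized plane). The one-coefficient model, the offset, and the sample points are illustrative assumptions, not values from the paper.

```python
import numpy as np

def radial_distort(pts, center, k1):
    # One-coefficient radial model about an explicit distortion center.
    d = pts - center
    r2 = np.sum(d ** 2, axis=1, keepdims=True)
    return center + d * (1.0 + k1 * r2)

def undistort(pts_d, center, k1, iters=20):
    # Invert the model by fixed-point iteration on the offsets.
    d = pts_d - center
    u = d.copy()
    for _ in range(iters):
        r2 = np.sum(u ** 2, axis=1, keepdims=True)
        u = d / (1.0 + k1 * r2)
    return center + u

# Ideal points on the normalized plane; the principal point maps to the
# origin, but the (assumed) distortion center is offset from it.
ideal = np.array([[0.3, 0.2], [-0.4, 0.1], [0.25, -0.35]])
true_center = np.array([0.02, -0.01])
distorted = radial_distort(ideal, true_center, -0.15)

err_true = np.abs(undistort(distorted, true_center, -0.15) - ideal).max()
err_pp = np.abs(undistort(distorted, np.zeros(2), -0.15) - ideal).max()
# err_pp (principal point used as the center) is orders of magnitude
# larger than err_true (correct distortion center).
```

Even a small center offset leaves a systematic residual when the principal point is used, which is the error the proposed method targets.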
