I. Introduction
Camera calibration is the procedure of determining accurate camera parameters. It establishes the correspondence between 3D points in the world coordinate system and their projections in the pixel coordinate system, from which both the intrinsic and extrinsic camera parameters are determined [1]. The intrinsic parameters describe the camera's internal characteristics, such as the focal length and the principal point, while the extrinsic parameters describe the camera's position and orientation. The accuracy of these parameters directly affects the reliability of subsequent computational tasks, so precise calibration is essential for accurate and robust computation in a wide range of computer vision applications.

Numerous calibration methods have been proposed, which can be grouped into traditional methods, active-vision-based methods, and camera self-calibration methods [2]. Traditional methods establish the correspondence between the world and pixel coordinate systems using a calibration object [3]. Active-vision-based methods compute the camera parameters from controlled camera motion [4]. Self-calibration methods compute the camera parameters from scene information alone, but they tend to be less robust in the presence of noise or non-ideal imaging conditions [5].

In all of these methods, radial and tangential lens distortions are unavoidable and shift the imaging position of each point; calibrating the camera without accounting for them introduces significant errors [6]. To reduce these errors, a camera distortion model is commonly used to correct the observed imaging points. However, existing distortion models either ignore the influence of the distortion center or simply treat the principal point as the distortion center. In practice, the principal point does not coincide exactly with the distortion center, so residual errors remain.
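To make the roles of the intrinsic and extrinsic parameters concrete, the sketch below projects a 3D world point to pixel coordinates with a standard pinhole model. All parameter values (focal lengths fx, fy, principal point (cx, cy), rotation R, translation t) are illustrative assumptions, not values from this paper.

```python
import numpy as np

# Illustrative intrinsic parameters (assumed values, for demonstration only):
# fx, fy are focal lengths in pixels; (cx, cy) is the principal point.
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0

# Illustrative extrinsic parameters: identity rotation, small translation.
R = np.eye(3)
t = np.array([0.1, -0.05, 2.0])

def project(X_world):
    """Pinhole projection: 3D world point -> 2D pixel coordinates."""
    X_cam = R @ X_world + t                          # world -> camera frame
    x, y = X_cam[0] / X_cam[2], X_cam[1] / X_cam[2]  # perspective division
    u = fx * x + cx                                  # normalized plane -> pixels
    v = fy * y + cy
    return u, v

print(project(np.array([0.2, 0.1, 3.0])))  # -> (368.0, 248.0)
```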
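The distortion issue raised above can likewise be sketched with the widely used radial-tangential (Brown-Conrady) model. The version below takes an explicit distortion center (dx, dy) that is allowed to differ from the principal point, which is exactly the distinction the passage draws; all coefficient values are hypothetical.

```python
# Hypothetical distortion coefficients and distortion center (illustration only).
k1, k2 = -0.25, 0.07    # radial distortion coefficients
p1, p2 = 1e-4, -5e-5    # tangential distortion coefficients
dx, dy = 0.002, -0.001  # distortion center in normalized image coordinates;
                        # NOT assumed to coincide with the principal point.

def distort(x, y):
    """Apply radial + tangential distortion about an explicit center (dx, dy).

    (x, y) are ideal (undistorted) normalized image coordinates; the return
    value is where the point actually lands after lens distortion.
    """
    xs, ys = x - dx, y - dy                # shift to the distortion center
    r2 = xs * xs + ys * ys                 # squared radius from that center
    radial = 1.0 + k1 * r2 + k2 * r2 * r2  # radial scaling factor
    x_t = 2.0 * p1 * xs * ys + p2 * (r2 + 2.0 * xs * xs)  # tangential terms
    y_t = p1 * (r2 + 2.0 * ys * ys) + 2.0 * p2 * xs * ys
    return xs * radial + x_t + dx, ys * radial + y_t + dy  # shift back

print(distort(0.3, 0.2))  # distorted position of an ideal point
```

If (dx, dy) is forced to zero, i.e., the principal point is treated as the distortion center, any true offset between the two is absorbed into the estimated coefficients, which is the source of the residual calibration error described above.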