The increasing demand for live multimedia systems in the gaming, art, and entertainment industries has driven the development of multiview capture systems built on camera arrays. We investigate sparse (widely spaced) camera arrays for capturing scenes that span large volumes. A vital aspect of such systems is camera calibration, which recovers the scene geometry needed for 3D reconstruction. Traditional algorithms rely on a calibration object or identifiable markers placed in the scene, but this is impractical and inconvenient for large spaces. Hence, we take a feature-based approach to calibration. Existing schemes based on SIFT (Scale-Invariant Feature Transform) are less accurate than marker-based schemes because of false positives in feature matching, variations in baseline (the spatial displacement between a camera pair), and changes in viewing angle. We therefore propose a new SIFT-feature-based calibration method that introduces a technique for detecting and removing incorrect SIFT matches and for selecting an optimal subset of matches. Experiments show that our algorithm achieves higher accuracy and faster execution for baselines of up to ≈2 meters at an object distance of ≈4.6 meters, thereby enhancing the usability and scalability of multi-camera capture systems for large spaces.
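To illustrate the kind of false-positive filtering the abstract refers to, the sketch below implements the standard Lowe ratio test, a common baseline technique for rejecting ambiguous SIFT descriptor matches. This is not the paper's proposed method; the descriptor data and the `ratio_test_matches` helper are illustrative assumptions.

```python
# Illustrative sketch (not the paper's algorithm): Lowe's ratio test.
# A candidate match is kept only if the nearest descriptor in the other
# image is much closer than the second nearest; otherwise it is treated
# as an ambiguous (likely false) match and discarded.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    """Return (i, j) index pairs matching descriptors in desc_a to desc_b
    that pass the ratio test; ambiguous matches are dropped."""
    matches = []
    for i, d in enumerate(desc_a):
        # Distances from descriptor i to every descriptor in the other set.
        dists = sorted((euclidean(d, e), j) for j, e in enumerate(desc_b))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches

# Toy 2-D "descriptors": A[0] has one clear best match in B,
# while A[1] sits almost equidistant from two candidates.
A = [[0.0, 0.0], [5.0, 5.0]]
B = [[0.1, 0.0], [9.0, 9.0], [5.0, 5.1], [5.1, 5.0]]
print(ratio_test_matches(A, B))  # → [(0, 0)]; the ambiguous match is rejected
```

In practice this filtering is typically followed by a geometric consistency check (e.g. RANSAC on the epipolar constraint) before the surviving matches are used for calibration.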