This paper presents a robust approach for 3D point reconstruction based on a set of images taken of a static scene with known, but not necessarily exact or regular, camera parameters. The points to be reconstructed are chosen from the contours of the images, and a world-based formulation of the reconstruction problem and its associated epipolar geometry is used. The result is a powerful means of transparently integrating contributions from multiple images, and increased robustness to situations such as occlusions or apparent contours. Two steps for adding robustness are proposed: cross-checking, which validates a reconstructed point taken from one image by projecting it onto a special subset of the remaining images; and merging, which fuses pairs of reconstructed points that are close in 3D space and that were initially chosen from different images. Results obtained with a synthetic scene (for ground-truth comparison and error assessment) and two real scenes show the improved robustness achieved with the proposed steps.
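The merging step described above can be illustrated with a minimal sketch: fuse pairs of reconstructed 3D points that lie within a distance threshold of each other. The greedy pairwise scan, the threshold value, and the midpoint fusion rule here are illustrative assumptions, not the paper's exact criterion (which also requires the points to originate from different images).

```python
import numpy as np

def merge_points(points, threshold=0.01):
    """Greedily fuse pairs of 3D points lying within `threshold` of each
    other, replacing each fused pair by its midpoint.
    `points` is an (N, 3) array-like; returns an (M, 3) array, M <= N."""
    pts = [np.asarray(p, dtype=float) for p in points]
    used = [False] * len(pts)
    merged = []
    for i, p in enumerate(pts):
        if used[i]:
            continue
        partner = None
        for j in range(i + 1, len(pts)):
            if not used[j] and np.linalg.norm(p - pts[j]) < threshold:
                partner = j  # first close-enough candidate
                break
        if partner is not None:
            used[partner] = True
            merged.append((p + pts[partner]) / 2.0)  # fuse the pair
        else:
            merged.append(p)  # no nearby point: keep as-is
    return np.array(merged)
```

A full implementation would additionally track which source image each point came from and only fuse points of different origin, since duplicates from the same image are more likely to be distinct features.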