In this paper we present a method for modeling a complex scene from a small set of images and then synthesizing new views. Taking one of the input images as the reference image, we initially select a small set of reliable pixel matches and then propagate them to neighboring pixels under a clustering-based, light-invariant photoconsistency constraint and a data-driven depth smoothness constraint. These constraints are integrated into a pixel matching quality function that handles occlusions, lighting changes, and depth discontinuities. Mismatched points arising during propagation are then iteratively rectified. As a result, the 3D structure of the scene is expressed by a large set of densely matched 3D points. A new view is initialized by projecting these matched points, and both match propagation and mismatch rectification are used to complete the rendering task.
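The seed-and-propagate step can be sketched as a best-first search: seed matches are placed in a priority queue ranked by a matching quality score, and the highest-quality match is repeatedly accepted and extended to its pixel neighbors. The sketch below is a minimal, hypothetical simplification that scores candidates with plain ZNCC photoconsistency only; the paper's full quality function additionally integrates the clustering-based light-invariant term and the data-driven depth smoothness term. All function names and parameters here are illustrative, not the authors' implementation.

```python
import heapq
import numpy as np

def zncc(a, b, p, q, r):
    """Zero-mean normalized cross-correlation of (2r+1)^2 windows at p and q."""
    (px, py), (qx, qy) = p, q
    wa = a[py - r:py + r + 1, px - r:px + r + 1].astype(float)
    wb = b[qy - r:qy + r + 1, qx - r:qx + r + 1].astype(float)
    if wa.shape != (2 * r + 1, 2 * r + 1) or wb.shape != wa.shape:
        return -1.0  # window falls outside the image
    wa -= wa.mean()
    wb -= wb.mean()
    denom = np.sqrt((wa ** 2).sum() * (wb ** 2).sum())
    return float((wa * wb).sum() / denom) if denom > 1e-9 else -1.0

def propagate(ref, tgt, seeds, r=1, min_q=0.8):
    """Best-first match propagation from a few seed correspondences.

    seeds: list of ((x, y), (x', y')) pixel pairs (reference -> target).
    Returns a dict mapping each matched reference pixel to its target pixel.
    """
    heap = [(-zncc(ref, tgt, p, q, r), p, q) for p, q in seeds]
    heapq.heapify(heap)  # max-heap via negated quality
    matched = {}
    while heap:
        neg_q, p, q = heapq.heappop(heap)
        if -neg_q < min_q or p in matched:
            continue  # reject low-quality or already-matched pixels
        matched[p] = q
        # propagate the match to the 4-neighborhood, keeping the same disparity
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            np_, nq_ = (p[0] + dx, p[1] + dy), (q[0] + dx, q[1] + dy)
            if np_ not in matched:
                heapq.heappush(heap, (-zncc(ref, tgt, np_, nq_, r), np_, nq_))
    return matched

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((10, 10))
    tgt = np.roll(ref, 2, axis=1)  # target is the reference shifted by 2 pixels
    matches = propagate(ref, tgt, seeds=[((5, 5), (7, 5))])
    print(matches[(4, 5)], matches[(6, 5)])
```

In the usage example, a single correct seed is enough to recover the constant 2-pixel disparity over the interior of the image, while border windows are rejected by the quality threshold. The paper's iterative mismatch rectification would additionally revisit and correct accepted matches whose quality degrades as the reconstruction densifies.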