In this paper, we investigate the problem of automatically creating 3D models of man-made environments, which we represent as collections of textured planes. A typical approach is to compute a sparse feature reconstruction automatically and then to specify by hand the plane membership of each feature as well as the delineation of the planes. Textures are then extracted from the images while the model is optimized, typically by minimizing the disparity between marked and predicted edges. We propose a means to automatically estimate the scene model, in terms of the number of planes and their parameters, from a point-feature reconstruction. The method is based on random sampling of reconstructed points to generate plane hypotheses. Each hypothesis is then evaluated using a measure of approximate photoconsistency while the corresponding plane delineation is recovered. Finally, we compute the maximum likelihood estimate of all scene parameters, i.e. the set of planes, the reconstructed points, and the relative camera poses, with respect to the actual images. The approach is validated on simulated data and real images.
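The hypothesis-generation step described above can be sketched as a greedy RANSAC-style loop over the reconstructed point cloud. The sketch below is illustrative only: the function names, thresholds, and the simple point-to-plane inlier test are assumptions, and it omits the photoconsistency scoring and delineation recovery that the paper's method uses to evaluate each hypothesis.

```python
import numpy as np

def fit_plane(p0, p1, p2):
    """Plane (n, d) through three points, with unit normal n and n.x + d = 0."""
    n = np.cross(p1 - p0, p2 - p0)
    norm = np.linalg.norm(n)
    if norm < 1e-12:
        return None  # degenerate (near-collinear) sample
    n = n / norm
    return n, -np.dot(n, p0)

def ransac_planes(points, iters=500, tol=0.01, min_inliers=30, seed=0):
    """Greedily extract plane hypotheses from an (N, 3) point array.

    Each round samples random point triples, keeps the best-supported
    plane, removes its inliers, and repeats until support is too small.
    (Hypothetical sketch; the paper scores hypotheses by approximate
    photoconsistency rather than by inlier count alone.)
    """
    rng = np.random.default_rng(seed)
    remaining = np.asarray(points, dtype=float).copy()
    planes = []
    while len(remaining) >= min_inliers:
        best = None
        for _ in range(iters):
            i, j, k = rng.choice(len(remaining), size=3, replace=False)
            plane = fit_plane(remaining[i], remaining[j], remaining[k])
            if plane is None:
                continue
            n, d = plane
            inliers = np.abs(remaining @ n + d) < tol
            if best is None or inliers.sum() > best[2].sum():
                best = (n, d, inliers)
        if best is None or best[2].sum() < min_inliers:
            break  # no hypothesis with sufficient support remains
        planes.append((best[0], best[1]))
        remaining = remaining[~best[2]]  # remove explained points
    return planes
```

In a full pipeline, each surviving hypothesis would then be re-scored against the images and its delineation estimated before entering the final maximum likelihood refinement.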