This work presents a unified framework for model-based and model-free reconstruction of people from multiple camera views in a studio environment. The shape and appearance of the reconstructed model are optimised simultaneously from multiple-view silhouette, stereo, and feature correspondences, with a priori knowledge of surface structure introduced as regularisation constraints. Model-based reconstruction assumes a known generic humanoid model a priori, which is fitted to the multi-view observations to produce a structured representation suitable for animation. Model-free reconstruction makes no a priori assumptions about scene geometry, allowing the reconstruction of complex dynamic scenes. Results are presented for the reconstruction of sequences of people from multiple views. The model-based approach produces a consistent structured representation that is robust in the presence of visual ambiguities, overcoming limitations of existing visual-hull and stereo techniques. Model-free reconstruction allows high-quality novel-view synthesis with accurate reproduction of the detailed dynamics of hair and loose clothing. Multiple-view optimisation achieves a visual quality comparable to the captured video, without visual artefacts due to misalignment of images.
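As background for the visual-hull techniques the abstract refers to, the sketch below shows the classic silhouette-based volume carving step on which such methods build: a voxel is kept only if its projection falls inside every camera's silhouette. This is a minimal illustration, not the paper's method; the function name, the synthetic affine projection matrices, and the disk silhouettes are all assumptions for the example.

```python
import numpy as np

def carve_visual_hull(silhouettes, projections, grid):
    """Keep voxels whose projection lies inside every silhouette.

    silhouettes: list of 2-D boolean images (indexed [row, col])
    projections: list of 3x4 camera projection matrices
    grid:        (N, 3) array of voxel centres
    Returns an (N,) boolean mask: a conservative upper bound on shape.
    """
    keep = np.ones(len(grid), dtype=bool)
    homog = np.c_[grid, np.ones(len(grid))].T        # 4 x N homogeneous points
    for sil, P in zip(silhouettes, projections):
        h, w = sil.shape
        pts = P @ homog                               # 3 x N image points
        uv = (pts[:2] / pts[2]).T                     # perspective divide
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(grid), dtype=bool)
        hit[inside] = sil[v[inside], u[inside]]       # look up silhouette mask
        keep &= hit                                   # intersect over all views
    return keep
```

As the abstract notes, the visual hull is only an outer bound on the true surface and is sensitive to ambiguities such as concavities, which is why the paper combines silhouette constraints with stereo and feature correspondences.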