This paper describes a novel method for video-based rendering. Given a set of real video sequences captured of a scene, the aim is to render a video sequence of that scene, in real time, from any viewpoint. By modelling the surfaces of the scene as a set of disjoint planar patches, we are able to efficiently estimate the parameters of the scene geometry. The patches can then be tracked over time using a multiresolution hierarchy. This time-varying surface model, together with the captured images, forms the input to the rendering algorithm, which uses a fuzzy z-buffer and projective texturing to generate reconstructions.
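The abstract does not specify the fuzzy z-buffer in detail, but the core idea is usually this: rather than keeping only the front-most fragment (a hard z-test), contributions whose depths fall within a small tolerance of the nearest one are blended, which smooths over small errors in the estimated patch geometry. Below is a minimal sketch of that per-pixel resolve step; the `Sample` struct, `resolvePixel` function, and `epsilon` parameter are illustrative assumptions, not the authors' implementation.

```cpp
#include <algorithm>
#include <array>
#include <vector>

// One candidate contribution to a pixel: a depth value and an RGB colour
// sampled from one of the input video cameras via projective texturing.
// (Hypothetical structure for illustration only.)
struct Sample {
    float depth;
    std::array<float, 3> rgb;
};

// Fuzzy z-buffer resolve for a single pixel: instead of keeping only the
// front-most sample, blend every sample whose depth lies within a
// tolerance `epsilon` of the nearest one.
std::array<float, 3> resolvePixel(const std::vector<Sample>& samples,
                                  float epsilon) {
    if (samples.empty()) return {0.f, 0.f, 0.f};

    // Depth of the front-most contribution.
    float zMin = std::min_element(samples.begin(), samples.end(),
                                  [](const Sample& a, const Sample& b) {
                                      return a.depth < b.depth;
                                  })->depth;

    // Average all contributions that fall inside the fuzzy depth band.
    std::array<float, 3> accum{0.f, 0.f, 0.f};
    int count = 0;
    for (const Sample& s : samples) {
        if (s.depth <= zMin + epsilon) {
            for (int c = 0; c < 3; ++c) accum[c] += s.rgb[c];
            ++count;
        }
    }
    for (int c = 0; c < 3; ++c) accum[c] /= static_cast<float>(count);
    return accum;
}
```

In a full renderer this resolve would run per pixel after projecting the planar patches into the novel view and sampling each source video frame as a projective texture; a weighted blend (e.g. by viewing-angle similarity) could replace the plain average used here.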