This paper presents an efficient and scalable coding scheme for transmitting a stream of 3D models extracted from a video of a static scene. As in classical model-based video coding, the geometry, connectivity, and texture of the 3D models have to be transmitted, as well as the camera position for each frame in the original video. Rather than coding each type of information independently, the proposed method exploits the interrelations between them, allowing better prediction both across media and of subsequent information in the stream. Scalability is achieved through wavelet-based representations for both the texture and the geometry of the models. A consistent connectivity is built for all 3D models extracted from the video sequence, yielding a consistent representation of the sequence as it possibly evolves in time. This allows a more compact representation and straightforward geometric morphing between successive time representations of the model. Furthermore, it leads to a consistent wavelet decomposition for the 3D models in the stream. Targeted applications include distant visualization of the original video at very low bitrates and interactive navigation in the extracted 3D scene on heterogeneous terminals.
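The scalability mechanism mentioned above can be illustrated with a minimal sketch (hypothetical, not the paper's implementation): a one-level Haar wavelet decomposition, where transmitting only the coarse averages gives a low-resolution reconstruction and the detail coefficients progressively refine it. All names and data here are illustrative.

```python
# Hypothetical sketch of wavelet-based scalability: coarse averages give a
# low-bitrate approximation; detail coefficients refine it to full quality.

def haar_decompose(signal):
    """One level of the (unnormalized) Haar transform: pairwise averages and differences."""
    averages = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    details = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    return averages, details

def haar_reconstruct(averages, details):
    """Invert one Haar level; zeroed details yield a coarse approximation."""
    signal = []
    for avg, det in zip(averages, details):
        signal.extend([avg + det, avg - det])
    return signal

# Illustrative data, e.g. one coordinate of a row of mesh vertices.
geometry = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 3.0]
avg, det = haar_decompose(geometry)

coarse = haar_reconstruct(avg, [0.0] * len(det))  # low bitrate: details dropped
full = haar_reconstruct(avg, det)                 # full quality: details received

assert full == geometry  # lossless reconstruction once all coefficients arrive
```

A consistent connectivity across the models in the stream means the same decomposition applies to every time step, so coefficients of successive models can be predicted from one another.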