We present a model-based approach to encoding multiple synchronized video streams that show a dynamic scene from different viewpoints. By utilizing 3D scene geometry, we compensate for motion and disparity by transforming all video images to object textures prior to compression. A 4D SPIHT wavelet compression algorithm exploits interframe coherence in both the temporal and spatial dimensions. Unused texels further increase compression, and the shape mask can be omitted at the cost of higher decoder complexity. The proposed coding scheme is intended for use in conjunction with free-viewpoint video and 3D-TV applications.
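As a rough, hedged illustration of the underlying idea (not the authors' actual codec): once all camera images are resampled into object-texture space, the textures from every time step and viewpoint can be stacked into a 4D array and decorrelated with a separable wavelet transform along each axis; a SPIHT-style coder then exploits the resulting energy compaction. The sketch below applies a single-level Haar transform along all four dimensions of a synthetic texture stack; the array layout `(time, view, tex_y, tex_x)` and the test data are assumptions for illustration only.

```python
import numpy as np

def haar_1d(a, axis):
    """Single-level orthonormal Haar transform along one axis.

    Returns the low-pass band followed by the high-pass band,
    each half the length of the input along `axis` (length must be even).
    """
    a = np.moveaxis(a, axis, 0)
    even, odd = a[0::2], a[1::2]
    low = (even + odd) / np.sqrt(2.0)
    high = (even - odd) / np.sqrt(2.0)
    out = np.concatenate([low, high], axis=0)
    return np.moveaxis(out, 0, axis)

def haar_4d(volume):
    """Apply one Haar level separably along time, view, and both texture axes."""
    for axis in range(4):
        volume = haar_1d(volume, axis)
    return volume

# Synthetic stack of texture maps: (time, view, tex_y, tex_x).
# Smoothly varying data stands in for the inter-frame and inter-view
# coherence that the transform is meant to exploit.
t, v, y, x = np.meshgrid(np.arange(4), np.arange(4),
                         np.arange(8), np.arange(8), indexing="ij")
textures = (t + v + y + x).astype(np.float64)

coeffs = haar_4d(textures)

# After the transform, most coefficients are (near) zero: the energy is
# compacted into a small low-pass corner, which is the property a
# SPIHT-style zerotree coder turns into compression.
sparsity = np.mean(np.abs(coeffs) < 1e-9)
print(f"fraction of near-zero coefficients: {sparsity:.2f}")
```

A real 4D SPIHT coder would iterate the transform over several levels and then encode the coefficients bit-plane by bit-plane using significance trees; the sketch only shows the decorrelation step that makes this efficient.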