In multiview video representation, one of the most popular formats is the so-called multiple view video plus depth. This representation is made up of N image sequences, each accompanied by a sequence of depth maps indicating the distance of each represented pixel from the observing camera. The depth maps are needed at the decoder side in order to generate intermediate views and therefore to enrich the user experience. This format is very flexible but also very demanding in terms of storage space and transmission bandwidth. Therefore, compression is needed. To this end, one of the key steps is an efficient representation of depth maps. In this work we build upon a previously proposed method for multiple view video coding, based on dense disparity estimation between views. This allows us to obtain a compact and high-quality depth map representation. In particular, we explore the complex relationship between estimation and encoding parameters, showing that an optimal parameter set exists, which allows a fine-tuning of the estimation phase and an adaptation of its results to the subsequent compression phase. Experiments are encouraging, showing remarkable gains over simple methods such as H.264/AVC simulcast, and even some gain with respect to more sophisticated techniques such as MVC.