This paper studies source and correlation models for distributed video coding (DVC). It first considers a two-state hidden Markov model (HMM), i.e., a Gilbert-Elliott process, to model the bit-planes produced by DVC schemes. A statistical analysis shows that this model accurately captures the memory present in the video bit-planes. The achievable rate bounds are derived for these ergodic sources, first assuming an additive binary symmetric channel (BSC) correlation model between the two sources. These bounds show that a rate gain can be achieved by exploiting the source memory under the additive BSC model. A Slepian-Wolf decoding algorithm that jointly estimates the sources and the source model parameters is then described. Simulation results show that the additive correlation model does not always fit the correlation between the actual video bit-planes well, which has led us to consider a second correlation model, the predictive model. The rate bounds are then derived for the predictive correlation model in the case of sources with memory, showing that exploiting the source memory brings no rate gain and that the noise statistic is a sufficient statistic for the MAP decoder. We also evaluate the rate loss incurred when the correlation model assumed by the decoder does not match the true one. An a posteriori estimation of the correlation channel has therefore been added to the decoder so that the most appropriate correlation model is used for each bit-plane. The new decoding algorithm has been integrated into a DVC decoder, leading to a rate saving of up to 10.14% at the same PSNR, with respect to the case where the bit-planes are assumed to be memoryless uniform sources correlated with the side information (SI) via an additive channel model.
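To make the two building blocks of the abstract concrete, the sketch below simulates a two-state Gilbert-Elliott binary source (bits with memory, as a model for DVC bit-planes) and passes it through an additive BSC to produce correlated side information. This is an illustrative toy only: the function names, parameter names, and parameter values are assumptions for the example and are not taken from the paper.

```python
import random


def gilbert_elliott_bits(n, p_gb, p_bg, p_good, p_bad, seed=0):
    """Sample n bits from a two-state Gilbert-Elliott (hidden Markov) source.

    The hidden state is 'good' or 'bad'; the source emits a 1 with
    probability p_good in the good state and p_bad in the bad state.
    p_gb and p_bg are the good->bad and bad->good transition probabilities.
    Persistent states (small p_gb, p_bg) give the bit stream its memory.
    """
    rng = random.Random(seed)
    state = "good"
    bits = []
    for _ in range(n):
        emit_prob = p_good if state == "good" else p_bad
        bits.append(1 if rng.random() < emit_prob else 0)
        if state == "good":
            if rng.random() < p_gb:
                state = "bad"
        elif rng.random() < p_bg:
            state = "good"
    return bits


def additive_bsc(bits, crossover, seed=1):
    """Additive BSC correlation: side information Y = X XOR Z,
    where the noise Z is i.i.d. Bernoulli(crossover)."""
    rng = random.Random(seed)
    return [b ^ (1 if rng.random() < crossover else 0) for b in bits]


# Source bit-plane X with memory, and side information Y correlated via a BSC.
x = gilbert_elliott_bits(10000, p_gb=0.05, p_bg=0.2, p_good=0.02, p_bad=0.6)
y = additive_bsc(x, crossover=0.1)
errors = sum(a != b for a, b in zip(x, y))  # should be close to 0.1 * 10000
```

Under the additive model the noise Z is independent of X, which is what makes a rate gain from the source memory possible; under the paper's predictive model that independence structure changes, which is why the memory no longer helps there.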