In this paper, we first propose a multiple description mesh-based motion coding method that generates two descriptions for mesh-based motion by subsampling the nodes of a right-angled triangular mesh and dividing them into two groups. The motion vectors associated with the mesh nodes in each group form the mesh-based motion field of one description, and the two descriptions are transmitted over distinct network channels. From the nodes in each group, two regular triangular meshes besides the original one can be constructed, so three different prediction images can be reconstructed depending on which descriptions are available. The proposed multiple description mesh-based motion coding method is then combined with the pairwise correlating transform proposed by Y. Wang et al. (1997), yielding a new and complete multiple description video coding scheme. Further measures are taken to reduce the mismatch between the encoder and the decoder that occurs when the decoder receives only one of the two descriptions and must use a different reference frame for motion compensation than the encoder. Simulations were carried out to evaluate the performance of the proposed scheme. The results show that, compared with the MDTC video coding method of A. Reibman et al. (2002), the proposed scheme achieves lower redundancy at a given distortion, and under packet loss it outperforms the MDTC method.
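To make the node-splitting step concrete, the following is a minimal sketch of subsampling the nodes of a regular right-angled triangular mesh and partitioning them into two groups, one per description. The function name, the grid layout, and the checkerboard (quincunx) partition pattern are illustrative assumptions, not the paper's exact construction.

```python
def split_mesh_nodes(width, height, step):
    """Place mesh nodes on a regular grid of spacing `step` and split
    them into two groups in a checkerboard pattern, one group per
    description.  (Illustrative assumption: the paper's subsampling
    rule may differ; only the two-group partition idea is from the
    abstract.)"""
    nodes = [(x, y)
             for y in range(0, height + 1, step)
             for x in range(0, width + 1, step)]
    # Group by the parity of the node's grid indices: nodes whose
    # index sum is even go to description A, odd to description B.
    group_a = [(x, y) for (x, y) in nodes
               if ((x // step) + (y // step)) % 2 == 0]
    group_b = [(x, y) for (x, y) in nodes
               if ((x // step) + (y // step)) % 2 == 1]
    return group_a, group_b
```

Each group, together with the motion vectors estimated at its nodes, would then be coded and sent over its own channel; either group alone still spans the frame coarsely, which is what allows a usable prediction image to be reconstructed from a single description.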