We present a novel 3D model-based distributed video coding algorithm. It is based on independent, model-based tracking of multiple sources and distributed coding of the tracked feature points. The model-based tracking scheme provides correspondences between the overlapping sets of features visible in the different views. While the motion estimates obtained from the tracking algorithm remove temporal redundancy and the 3D model removes spatial redundancy, distributed coding is used to eliminate inter-sensor redundancy. Thus, in contrast to most current video compression standards, which exploit only the spatial and temporal redundancy within each video sequence, we also exploit the significant redundancy between the sequences. Results demonstrate that our algorithm yields significant bit-rate savings on the overlapping portions of multiple views.