We propose a method for delivering error-resilient video from wireless camera networks in a distributed fashion over lossy channels. Our scheme is based on distributed source coding and exploits the inter-view correlation among cameras with overlapping views. The main focus of this work is robustness, which is critically needed in a wireless setting. The proposed approach has low encoding complexity, is robust under tight latency constraints, and requires no inter-camera communication. Our system builds on PRISM, an earlier single-camera distributed video compression system, and extends it to the multi-camera setting. Decoder motion search, a key attribute of single-camera PRISM, is extended to the multi-view setting by using estimated scene depth information when it is available. In particular, dense stereo correspondence and view synthesis are used to generate side information. When combined with decoder motion search, the proposed method can be made insensitive to small errors in camera calibration, disparity estimation, and view synthesis. In experiments over a simulated wireless channel, the proposed approach achieves up to a 2.1 dB gain in PSNR over a system using H.263+ with forward error correction.