We investigate the coding of multiple image sequences captured by video sensors. The sensors are arranged in an array to monitor the same scene from different viewpoints and are connected to a central decoder via a network. Because the sensors observe the same scene, their images are highly view-correlated; this correlation can be exploited if the sensors operate in a collaborative fashion. In contrast, the temporal correlation among the images of each sequence can be exploited locally at each sensor. Collaborative coding of the multi-view videos can be achieved by distributed processing of the multi-view imagery. If the video sensor network utilizes a central decoder, the view correlation can be exploited by centralized disparity compensation at the decoder. Before the decoder can apply disparity compensation efficiently, however, accurate disparity values must be estimated at the central decoder. This paper discusses the estimation of disparity fields at the central decoder and uses these estimates for centralized disparity compensation to improve the coding efficiency of the video sensor network.
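The centralized disparity compensation described above can be illustrated, in simplified form, by block-based disparity estimation between two rectified views: the decoder finds, for each block of the target view, the horizontal shift that best matches the reference view, then forms a disparity-compensated prediction. This is only a minimal sketch under assumed conditions (rectified views, purely horizontal disparity, SAD matching); the function names and block/search parameters are illustrative, not the paper's actual method.

```python
import numpy as np

def estimate_disparity(ref, tgt, block=8, search=16):
    """Block-based horizontal disparity estimation via SAD matching.

    For each block of the target view, search the reference view for the
    horizontal shift d that minimizes the sum of absolute differences.
    """
    h, w = tgt.shape
    disp = np.zeros((h // block, w // block), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            cur = tgt[y:y + block, x:x + block].astype(int)
            best_sad, best_d = None, 0
            for d in range(-search, search + 1):
                xs = x + d
                if xs < 0 or xs + block > w:
                    continue  # candidate block would leave the image
                cand = ref[y:y + block, xs:xs + block].astype(int)
                sad = np.abs(cur - cand).sum()
                if best_sad is None or sad < best_sad:
                    best_sad, best_d = sad, d
            disp[by, bx] = best_d
    return disp

def compensate(ref, disp, block=8):
    """Predict the target view from the reference view using the disparity field."""
    h, w = ref.shape
    pred = np.zeros_like(ref)
    for by in range(disp.shape[0]):
        for bx in range(disp.shape[1]):
            y, x = by * block, bx * block
            xs = min(max(x + disp[by, bx], 0), w - block)  # clamp to image
            pred[y:y + block, x:x + block] = ref[y:y + block, xs:xs + block]
    return pred
```

In a centralized setting, such a prediction would be formed at the decoder from already-decoded views, so only the disparity-compensated residual of each additional view needs to be conveyed.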