Multi-view video systems provide 3D information about the captured scene. This 3D information is useful for many emerging applications, such as 3D TV and virtual reality. However, many current video systems consist of only one camera and consequently do not capture the 3D content of a scene. In this paper, we therefore present an efficient, flexible, and low-complexity method for extending an existing mono video system to a 3D system. The main idea is to develop a coding framework that starts from a single camera and can be flexibly extended with low-complexity cameras to capture 3D video data. These cameras perform no motion or disparity estimation; good coding efficiency is nevertheless achieved by relying on distributed video (DV) coding principles, i.e., joint decoding of the independently encoded frames of the multi-view cameras. Compared with low-complexity DV coding of a single video, our approach achieves higher efficiency, since the decoder exploits not only the motion between the frames of each video but also the disparity between the different views of the camera array.
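To make the decoder-side idea concrete, the sketch below illustrates how a DV decoder can build side information for a frame by fusing a temporal predictor (from neighboring key frames of the same camera) with an inter-view predictor (from a neighboring camera). This is a minimal illustration, not the paper's actual algorithm: the function names are hypothetical, real DVC decoders use motion-compensated interpolation rather than plain averaging, and a single constant disparity shift stands in for true disparity estimation.

```python
import numpy as np

def temporal_side_info(prev_frame, next_frame):
    # Temporal predictor: average of the neighboring key frames.
    # (A real DVC decoder would use motion-compensated interpolation.)
    return (prev_frame.astype(np.float64) + next_frame) / 2.0

def interview_side_info(neighbor_view, disparity):
    # Inter-view predictor: shift the neighboring camera's frame
    # horizontally by an assumed constant integer disparity.
    return np.roll(neighbor_view, shift=disparity, axis=1)

def fused_side_info(prev_frame, next_frame, neighbor_view, disparity, w=0.5):
    # Fuse the two predictors; w weights the temporal estimate.
    # Exploiting both motion (temporal) and disparity (inter-view)
    # at the decoder is what improves over single-view DV coding.
    t = temporal_side_info(prev_frame, next_frame)
    v = interview_side_info(neighbor_view, disparity).astype(np.float64)
    return w * t + (1.0 - w) * v
```

In practice the fused side information serves as the decoder's initial estimate, which is then refined with the parity (Wyner-Ziv) bits received from the low-complexity encoder.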