I. Introduction
Recently, multiview video (MVV) has drawn public attention by providing viewers with a complete 3D perception through its multiple-viewpoint feature. As display technologies evolve, various multiview video applications, such as 3DTV [1] and free-viewpoint TV [2], are emerging. However, several challenges in MVV applications must still be overcome. First, an efficient multiview video coding (MVC) method is required to deal with the drastically increased amount of data. In addition, since it is impossible to capture video sequences from all viewpoints with infinitely many real cameras, virtual view synthesis is required to generate virtual-view frames between the captured viewpoints and to fill the non-captured areas, thereby supporting smooth and continuous viewpoint switching. In July 2008, MVC was standardized as the Multiview High Profile in H.264/AVC by the MPEG 3D Audio/Video (3DAV) group [3]. Since the completion of the MVC standardization, the MPEG-FTV group has been working on virtual view synthesis and has released the view synthesis reference software (VSRS) as the reference implementation and research platform [4].

Because virtual view synthesis is a newly emerging research area, many design challenges remain open. These challenges stem mainly from occlusion handling in virtual-view frames. In a multiview sequence, an occluded area can be filled using reference frames from viewpoints neighboring the target virtual viewpoint. However, since the frames of different views are captured by different cameras at different locations, color and illumination also vary between views. Many previous works address color/illumination compensation in multiview video [5]-[7], but most of them focus on color compensation before or within the encoding step; color compensation in the virtual-view-synthesis step has received little attention.
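The paper's own correspondence estimation is presented later; purely as a generic illustration of the inter-view color mismatch described above, the sketch below fits a per-channel gain/offset model between a reference view and a target view by least squares and uses it to map the reference pixels into the target's color space. The function name `fit_gain_offset` and the synthetic gain/offset values are hypothetical, chosen only for this example.

```python
import numpy as np

def fit_gain_offset(ref, tgt):
    """Least-squares fit of tgt ~= a * ref + b for one color channel
    (hypothetical helper; real schemes may fit per block or per region)."""
    a, b = np.polyfit(ref.ravel(), tgt.ravel(), 1)
    return a, b

# Synthetic example: the target view equals the reference view distorted
# by a simulated illumination mismatch (gain 1.1, offset 8.0).
rng = np.random.default_rng(0)
ref = rng.uniform(0, 255, size=(64, 64))
tgt = 1.1 * ref + 8.0

a, b = fit_gain_offset(ref, tgt)
corrected = a * ref + b            # reference mapped into the target's colors
err = np.abs(corrected - tgt).mean()
print(a, b, err)
```

In practice the fit would be computed only over pixels visible in both views (established, e.g., by depth-based warping), since occluded pixels have no valid correspondence.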
In this paper, a hybrid color compensation scheme targeting virtual view synthesis in multiview video applications is proposed. With the proposed inter-view color correspondence estimation, the color mismatch caused by multiple reference views can be eliminated, and a smoothly varying light-field environment can be generated. Furthermore, by detecting reflective regions and introducing an appropriate reflection model, mirror-like materials can be distinguished from the general case and handled separately. As a result, virtual-view frames for 3D and multiview video applications can be synthesized with better perceptual quality. In the objective PSNR test, the proposed method also achieves gains of 0.26–0.42 dB.