Error compensation and reliability based view synthesis

6 Author(s): Wenxiu Sun; Oscar C. Au; Lingfeng Xu; Sung Him Chui; et al. — Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong

View synthesis offers great flexibility in generating free-viewpoint television (FTV) and 3D video (3DV). However, depth-image-based view synthesis is very sensitive to errors in the camera parameters and to poorly estimated depth maps (also called depth images). Because of these errors, three kinds of artifacts (blurring, contour, and hole artifacts) may be introduced during the synthesis process. In contrast to conventional methods, which implement view synthesis only in the ideal case, in this paper we propose an error compensation and reliability based view synthesis system in which the potential errors are explicitly considered. The main contributions are as follows. Firstly, the camera parameter errors are compensated by a global homography transformation matrix. Secondly, the depth maps are classified into reliable and unreliable regions, and reliability-based weighting masks are built to blend the synthesized images from two different views. Finally, a reliability-depth-map-based hole-filling technique is used to fill the remaining holes. The experimental results demonstrate that these artifacts are effectively reduced in the synthesized images.
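The reliability-based blending step described above can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes two views already warped to the target viewpoint, with holes marked by a sentinel value and hypothetical per-pixel reliability weights (which, in the paper's setting, would come from the classified depth maps). Pixels that are holes in one view fall back to the other; pixels missing in both remain holes for a later hole-filling pass.

```python
import numpy as np

def blend_views(left, right, rel_left, rel_right, hole_value=-1.0):
    """Blend two warped views using per-pixel reliability weights.

    left, right: HxW arrays warped to the target viewpoint; pixels equal
    to `hole_value` are holes (no source pixel mapped there).
    rel_left, rel_right: HxW reliability weights in [0, 1] (hypothetical;
    in practice they would be derived from depth-map reliability).
    """
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)

    hole_l = left == hole_value
    hole_r = right == hole_value

    # Zero the weight of hole pixels so they never contribute.
    w_l = np.where(hole_l, 0.0, rel_left)
    w_r = np.where(hole_r, 0.0, rel_right)

    total = w_l + w_r
    num = (w_l * np.where(hole_l, 0.0, left)
           + w_r * np.where(hole_r, 0.0, right))
    # Pixels with no reliable source in either view stay holes.
    return np.where(total > 0, num / np.maximum(total, 1e-12), hole_value)
```

With equal reliability in both views this reduces to simple averaging of the overlapping regions, while regions visible in only one view are copied through; hole filling then only has to handle pixels missing in both views.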

Published in:

2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

Date of Conference:

22-27 May 2011