Improved view synthesis prediction using decoder-side motion derivation for multiview video coding


2 Author(s)

This paper proposes a novel method that uses temporal reference pictures to improve the quality of view synthesis prediction. Existing view synthesis prediction schemes generate image signals from inter-view reference pictures only. However, many kinds of signal mismatch, such as illumination, color, and focus mismatch, exist across views, and these mismatches degrade prediction performance. The proposed method first synthesizes an initial view by conventional depth-based warping, and then uses blocks of this initial synthesized view as templates to derive fine motion vectors. The initial synthesized view is then updated using the derived motion vectors and the temporal reference pictures, which yields the final prediction output. Experiments show that the proposed method improves view synthesis quality by about 14 dB for Ballet and 4 dB for Breakdancers at high bitrates, and reduces the bitrate by about 2% relative to conventional view synthesis prediction.
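The core decoder-side step described above — matching a block of the initial synthesized view against a temporal reference picture to derive a motion vector — can be sketched as a simple full-search template match minimizing the sum of absolute differences (SAD). This is a minimal illustration, not the authors' exact algorithm; all function names, the search range, and the SAD cost are illustrative assumptions.

```python
# Hedged sketch of decoder-side motion derivation via SAD template matching.
# Names and parameters are illustrative, not taken from the paper.

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized 2-D blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def extract_block(frame, y, x, size):
    """Cut a size x size block out of a frame (list of lists) at (y, x)."""
    return [row[x:x + size] for row in frame[y:y + size]]

def derive_motion_vector(template, reference, y0, x0, search_range, size):
    """Full search around (y0, x0) in the temporal reference picture for the
    displacement (dy, dx) whose block best matches the template taken from
    the initial synthesized view."""
    best_cost, best_mv = float("inf"), (0, 0)
    h, w = len(reference), len(reference[0])
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = y0 + dy, x0 + dx
            if 0 <= y and y + size <= h and 0 <= x and x + size <= w:
                cost = sad(template, extract_block(reference, y, x, size))
                if cost < best_cost:
                    best_cost, best_mv = cost, (dy, dx)
    return best_mv

# Toy usage: an 8x8 temporal reference with a distinctive 2x2 pattern at
# (3, 4); the template block sits at (2, 2) in the synthesized view, so the
# derived motion vector should be (1, 2).
reference = [[0] * 8 for _ in range(8)]
reference[3][4], reference[3][5] = 5, 7
reference[4][4], reference[4][5] = 6, 8
template = [[5, 7], [6, 8]]
mv = derive_motion_vector(template, reference, 2, 2, 3, 2)  # → (1, 2)
```

In the paper's scheme, the matched reference block would then replace (or refine) the corresponding block of the initial synthesized view, so that temporal signal characteristics compensate for inter-view illumination, color, and focus mismatches.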

Published in:

3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), 2010

Date of Conference:

7-9 June 2010