Adaptive motion-estimation-mode selection for depth video coding

Author(s):

B. Kamolrat (Centre for Communication System Research, University of Surrey, Guildford, GU2 7XH, UK); W. A. C. Fernando; M. Mrak

An effective representation of 3D video for future 3D-TV systems consists of monoscopic video (the colour component) and associated per-pixel depth information (the depth component). Because the depth component indicates the relative distance between objects in the scene and the camera, pixel values change not only when objects move horizontally or vertically but also when they move in the depth direction. Instead of predicting object motion in only two directions, as in traditional video codecs, three-dimensional block matching (3D-BM) achieves more accurate motion estimation for depth video coding. However, the overall performance of 3D-BM exceeds that of traditional two-dimensional block matching (2D-BM) only at high bit rates. In this paper, an adaptive 2D/3D-BM selection algorithm is introduced to balance the performance of 2D-BM and 3D-BM. Lagrangian optimisation is applied to select the motion-estimation mode at the block level. The experimental results reveal that the proposed adaptive motion-estimation-mode selection improves on 3D-BM at low bit rates while preserving the advantages of 3D-BM at high bit rates.
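To illustrate the block-level mode decision described above, the following is a minimal sketch assuming a Lagrangian cost of the form J = D + lambda * R, where D is the block distortion and R the bit cost of the motion information. The function names, signatures, and return values are illustrative placeholders, not the paper's implementation.

# Sketch of block-level motion-estimation-mode selection via a Lagrangian
# cost J = D + lambda * R. The 2D/3D block-matching routines are assumed
# placeholders that return (distortion, rate_bits, motion_vector).

def lagrangian_cost(distortion, rate_bits, lam):
    """Rate-distortion cost J = D + lambda * R."""
    return distortion + lam * rate_bits

def select_me_mode(block, reference_frames, lam, run_2d_bm, run_3d_bm):
    """Choose 2D-BM or 3D-BM for one block by minimising the Lagrangian cost."""
    d2, r2, mv2 = run_2d_bm(block, reference_frames)
    d3, r3, mv3 = run_3d_bm(block, reference_frames)

    j2 = lagrangian_cost(d2, r2, lam)
    j3 = lagrangian_cost(d3, r3, lam)

    # At low bit rates (large lambda) the extra motion-vector component of
    # 3D-BM is penalised more heavily, so 2D-BM tends to be chosen; at high
    # bit rates the lower distortion of 3D-BM dominates.
    if j2 <= j3:
        return "2D-BM", mv2, j2
    return "3D-BM", mv3, j3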

Published in: 2010 IEEE International Conference on Acoustics, Speech and Signal Processing

Date of Conference: 14-19 March 2010