A Depth-Aware Character Generator for 3DTV

Authors: Juhyun Oh (Technical Research Institute, Korean Broadcasting System, Seoul, Korea); Kwanghoon Sohn

In 3DTV, video captions and graphics should be inserted at appropriate scene depths to prevent viewing discomfort. We propose a character generator that automatically analyzes scene disparities and determines the proper depth for the graphic object to be inserted. The challenge is that disparity range estimation from feature correspondences is severely affected by even a few outliers, whereas naïve SURF or BRIEF matching produces a considerable number of outliers. We propose a multiple-hypothesis feature matching algorithm that exploits the disparity coherence between adjacent features, with which most mismatches can be removed according to a reliability measure aggregated from neighboring features. To estimate an accurate disparity range from the feature correspondences, a disparity histogram is computed and filtered with a space-time kernel to suppress the effect of incorrect disparities. We also propose a disparity-depth conversion in the asymmetric view frustum used for the stereoscopic rendering of graphics. Experimental results show that a 3D graphic object is successfully inserted at the desired depth obtained from the proposed disparity range estimation and disparity-depth conversion.
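The histogram-based range estimation described above can be illustrated with a minimal sketch. This is not the authors' implementation; the bin count, kernel weights, and tail fraction below are illustrative assumptions, and only spatial (single-frame) smoothing is shown, whereas the paper filters with a space-time kernel across frames as well.

```python
import numpy as np

def estimate_disparity_range(disparities, n_bins=64, d_min=-30.0, d_max=30.0,
                             kernel=(0.25, 0.5, 0.25), tail_frac=0.01):
    """Estimate a robust disparity range from feature-match disparities.

    1. Histogram the disparities over a fixed search range.
    2. Smooth the histogram with a small kernel so that isolated bins
       (likely produced by outlier matches) are suppressed.
    3. Read the range off the smoothed distribution, trimming a small
       fraction of mass from each tail instead of taking the raw min/max.
    """
    hist, edges = np.histogram(disparities, bins=n_bins, range=(d_min, d_max))
    smoothed = np.convolve(hist.astype(float), kernel, mode="same")
    total = smoothed.sum()
    if total == 0:
        return d_min, d_max  # no features: fall back to the full range
    cdf = np.cumsum(smoothed) / total
    lo = np.searchsorted(cdf, tail_frac)
    hi = np.searchsorted(cdf, 1.0 - tail_frac)
    return edges[lo], edges[min(hi + 1, n_bins)]
```

Trimming by cumulative mass rather than taking the extreme bins is one simple way to keep a few surviving mismatches from stretching the estimated range, which in turn keeps the depth chosen for the inserted graphic stable.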

Published in:

IEEE Transactions on Broadcasting (Volume 58, Issue 4)