Brain-Inspired Framework for Fusion of Multiple Depth Cues

7 Author(s): Chung-Te Li; Yen-Chieh Lai; Chien Wu; Sung-Fang Tsai; et al. (Grad. Inst. of Electron. Eng., Nat. Taiwan Univ., Taipei, Taiwan)

2-D-to-3-D conversion is an important step in obtaining 3-D videos, and a variety of monocular depth cues have been explored for generating 3-D videos from 2-D videos. As in the human brain, a fusion of these monocular depth cues can regenerate 3-D data from 2-D data. By mimicking how our brains form depth perception, we propose a reliability-based fusion of multiple depth cues for automatic 2-D-to-3-D video conversion. A series of comparisons between the proposed framework and previous methods shows that significant improvements are achieved in both subjective and objective experimental results. Subjectively, the brain-inspired framework outperforms earlier conversion methods by preserving more reliable depth cues. Objectively, the perceptual quality of the videos improves by 0.70-3.14 dB in terms of a modified peak signal-to-noise ratio and by 0.0059-0.1517 under a disparity distortion model.
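The reliability-based fusion described in the abstract can be illustrated as a reliability-weighted combination of per-cue depth maps. The sketch below is only a minimal illustration of that general idea, not the paper's actual formulation; the function name, scalar weights, and toy cue maps are all assumptions for demonstration:

```python
import numpy as np

def fuse_depth_cues(depth_maps, reliabilities):
    """Fuse per-cue depth maps via reliability-weighted averaging.

    depth_maps:    list of HxW arrays, one per monocular cue
                   (e.g. motion parallax, defocus, perspective).
    reliabilities: list of non-negative per-cue weights (scalars here;
                   per-pixel reliability maps would fit the same scheme).
    """
    depths = np.stack(depth_maps)            # shape (C, H, W)
    w = np.asarray(reliabilities, dtype=float)
    w = w / w.sum()                          # normalize reliabilities to weights
    return np.tensordot(w, depths, axes=1)   # weighted sum over cues -> (H, W)

# Toy example: two 2x2 depth maps from hypothetical cues, with the
# first cue judged three times as reliable as the second.
d_motion = np.array([[0.2, 0.4], [0.6, 0.8]])
d_defocus = np.array([[0.3, 0.3], [0.7, 0.7]])
fused = fuse_depth_cues([d_motion, d_defocus], [3.0, 1.0])
```

A more faithful implementation would estimate reliability per cue and per region from the video content before fusing, as the brain-inspired framework does with its depth cues.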

Published in: IEEE Transactions on Circuits and Systems for Video Technology (Volume: 23, Issue: 7)