
Constructing 3D natural scene from video sequences with vibrated motions

3 Author(s):
Zhigang Zhu (Dept. of Comput. Sci. & Technol., Tsinghua Univ., Beijing, China); Guangyou Xu; Xueyin Lin

Abstract:
This paper presents a systematic approach to automatically constructing 3D natural scenes from video sequences. Dense layered depth maps are derived from image sequences captured by a vibrating camera whose motion is only approximately known. The approach consists of (1) image stabilization by motion filtering and (2) depth estimation by spatio-temporal texture analysis. The two-stage method not only generalizes the so-called panoramic image and epipolar plane image methods to handle image-sequence vibrations caused by uncontrollable camera fluctuations, but also bypasses the feature extraction and matching problems encountered in stereo and visual motion analysis. Our approach allows automatic modeling of the real environment for inclusion in VR representations.
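To make the two-stage pipeline concrete, the following is a minimal sketch in Python (NumPy, OpenCV, SciPy) of one way such a pipeline could be realized. It is not the authors' implementation: phase correlation for global motion, a uniform filter for trajectory smoothing, pure-translation warps, and a gradient-based velocity estimate on the x-t slice are all simplifying assumptions made here, and the function names (stabilize, epi_relative_depth) are hypothetical. Single-channel (grayscale) frames are assumed throughout.

import numpy as np
import cv2
from scipy.ndimage import uniform_filter, uniform_filter1d

def stabilize(frames, smooth_window=15):
    # Stage 1 (sketch): image stabilization by motion filtering.
    # Estimate the cumulative inter-frame translation with phase correlation,
    # low-pass filter the trajectory, and warp out only the residual jitter.
    grays = [f.astype(np.float32) for f in frames]   # single-channel frames assumed
    path = [np.zeros(2)]
    for prev, cur in zip(grays[:-1], grays[1:]):
        (dx, dy), _ = cv2.phaseCorrelate(prev, cur)
        path.append(path[-1] + np.array([dx, dy]))
    path = np.asarray(path)
    smooth = uniform_filter1d(path, size=smooth_window, axis=0, mode="nearest")
    jitter = path - smooth                           # vibration component only
    h, w = grays[0].shape
    out = []
    for f, (jx, jy) in zip(frames, jitter):
        M = np.float32([[1, 0, -jx], [0, 1, -jy]])   # undo the jitter, keep the smooth motion
        out.append(cv2.warpAffine(f, M, (w, h)))
    return out

def epi_relative_depth(stabilized, row, window=9):
    # Stage 2 (sketch): relative depth from the x-t slice (epipolar plane image)
    # of one scanline. A scene point traces a near-straight track whose image
    # velocity is inversely related to its distance, so a local spatio-temporal
    # gradient estimate of that velocity gives dense relative depth without
    # explicit feature matching.
    epi = np.stack([f[row].astype(np.float32) for f in stabilized])  # shape (t, x)
    gt, gx = np.gradient(epi)
    Jxx = uniform_filter(gx * gx, window)
    Jxt = uniform_filter(gx * gt, window)
    velocity = -Jxt / (Jxx + 1e-6)           # dx/dt of the local track
    return 1.0 / (np.abs(velocity) + 1e-6)   # relative depth, proportional to 1/|velocity|

Running epi_relative_depth over many rows of the stabilized sequence yields a dense relative-depth profile that could then be quantized into layers; recovering metric depth would additionally require the (approximately known) camera speed and the frame rate.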

Published in:

Proceedings of the IEEE 1998 Virtual Reality Annual International Symposium (VRAIS '98)

Date of Conference:

18-18 1998