Video-Based Rendering using Feature Point Evolution

Authors (2): Zhang, W.; Chen, T. (Carnegie Mellon Univ., Pittsburgh, PA, USA)

We propose a novel video-based rendering algorithm that uses a single moving camera. We reconstruct a dynamic 3D model of the scene with a feature point set that "evolves" over time. As the scene's appearance changes due to camera and object motions, some existing feature points disappear while new feature points appear relative to the camera. The 3D positions and motions of newly generated feature points are initialized from the positions and motions of nearby existing feature points. When incorporated into standard tracking and 3D reconstruction algorithms, this feature evolution yields robust, dense 3D meshes and their corresponding motions. Consequently, the evolution-based, time-dependent 3D meshes, motions, and textures render good-quality images at a virtual viewpoint and at a desired time instance. We also extend the proposed video-based rendering algorithm from a single moving camera with one reconstructed depth map to multiple moving cameras with multiple reconstructed depth maps, avoiding occlusion and improving the rendering quality.
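The abstract states that newly generated feature points are initialized from the positions and motions of nearby existing feature points. The paper does not give the exact scheme, but a minimal sketch of one plausible realization, assuming an inverse-distance-weighted average over the k nearest existing points in the image plane (function name, weighting, and k are illustrative assumptions, not the authors' method), might look like this:

import numpy as np

def init_new_feature_points(new_pts_2d, existing_pts_2d,
                            existing_pts_3d, existing_motions, k=3):
    """Hypothetical initialization of new feature points from nearby
    existing ones (a sketch, not the paper's exact procedure).

    new_pts_2d       : (M, 2) image coordinates of newly detected points
    existing_pts_2d  : (N, 2) image coordinates of existing points
    existing_pts_3d  : (N, 3) reconstructed 3D positions
    existing_motions : (N, 3) estimated 3D motion vectors
    """
    new_pts_3d = np.zeros((len(new_pts_2d), 3))
    new_motions = np.zeros((len(new_pts_2d), 3))
    for i, p in enumerate(new_pts_2d):
        # distances in the image plane to every existing feature point
        d = np.linalg.norm(existing_pts_2d - p, axis=1)
        nn = np.argsort(d)[:k]              # indices of the k nearest neighbors
        w = 1.0 / (d[nn] + 1e-6)            # inverse-distance weights
        w /= w.sum()
        # weighted average of the neighbors' 3D positions and motions
        new_pts_3d[i] = w @ existing_pts_3d[nn]
        new_motions[i] = w @ existing_motions[nn]
    return new_pts_3d, new_motions

The initialized positions and motions would then be refined by the tracking and 3D reconstruction steps described in the paper.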

Published in:

2006 IEEE International Conference on Image Processing

Date of Conference:

8-11 Oct. 2006