We propose a novel video-based rendering algorithm that uses a single moving camera. We reconstruct a dynamic 3D model of the scene from a feature point set that "evolves" over time: as the scene's appearance changes due to camera and object motion, some existing feature points disappear while new feature points appear relative to the camera. Each newly generated feature point's 3D position and motion are initialized from the positions and motions of nearby existing feature points. When incorporated into standard tracking and 3D reconstruction algorithms, this feature evolution yields robust, dense 3D meshes and their corresponding motions. Consequently, the evolution-based, time-dependent 3D meshes, motions, and textures enable good-quality rendering at a virtual viewpoint and at a desired time instance. We also extend the proposed algorithm from a single moving camera with one reconstructed depth map to multiple moving cameras with multiple reconstructed depth maps, which avoids occlusion and improves rendering quality.
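The initialization of newly appearing feature points from nearby existing ones can be sketched as follows. This is a minimal illustration, assuming inverse-distance weighting over the k nearest neighbors in image space; the paper's exact interpolation scheme may differ, and all names here are hypothetical.

```python
import math

def init_new_feature(new_xy, existing, k=3):
    """Initialize a new feature point's 3D position and motion from its
    k nearest existing feature points (hypothetical inverse-distance
    weighting; the actual scheme in the paper may differ).

    new_xy:   2D image location of the newly detected feature point
    existing: list of dicts, each with keys
              "xy"  (2D image location),
              "pos" (reconstructed 3D position),
              "vel" (estimated 3D motion)
    """
    # Pick the k existing feature points closest in the image plane.
    ranked = sorted(existing, key=lambda f: math.dist(new_xy, f["xy"]))[:k]
    # Inverse-distance weights (epsilon guards against division by zero).
    weights = [1.0 / (math.dist(new_xy, f["xy"]) + 1e-6) for f in ranked]
    total = sum(weights)
    # Weighted averages of the neighbors' 3D positions and motions.
    pos = [sum(w * f["pos"][i] for w, f in zip(weights, ranked)) / total
           for i in range(3)]
    vel = [sum(w * f["vel"][i] for w, f in zip(weights, ranked)) / total
           for i in range(3)]
    return pos, vel
```

The initialized position and motion would then be refined by the subsequent tracking and 3D reconstruction steps, rather than used as-is.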