Multi-viewpoint video has recently gained significant attention in both academic and commercial fields. In this work, we propose a new method for incorporating 3D point cloud models into multi-viewpoint video. First, we synthesize virtual multi-viewpoint video using the depth and texture maps of the input video. Then, we integrate 3D point cloud models into the synthesized multi-viewpoint video by analyzing its depth information. As our experiments show, 3D point clouds can be seamlessly inserted into multi-viewpoint video and realistic effects can be obtained. In addition, we compare virtual-viewpoint images generated by interpolating the two nearest-neighbor cameras with those generated by re-projecting the single nearest camera.
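The depth-based integration step can be sketched as follows: each 3D point is projected into the virtual view with the camera intrinsics, and it is drawn only where it passes a depth test against the view's depth map, so scene geometry correctly occludes the inserted model. This is a minimal illustrative sketch, not the paper's actual implementation; all names, and the assumption that points are already in the camera coordinate frame, are ours.

```python
import numpy as np

def insert_point_cloud(image, depth, points, colors, K):
    """Depth-aware compositing of a colored point cloud into a rendered view.

    image  : (H, W, 3) uint8 rendered virtual-viewpoint image
    depth  : (H, W) float depth map of that view (same camera)
    points : (N, 3) float point cloud in the camera coordinate frame
    colors : (N, 3) uint8 per-point colors
    K      : (3, 3) camera intrinsic matrix
    """
    out = image.copy()
    z = points[:, 2]
    valid = z > 0                                   # keep points in front of the camera
    pts = points[valid] / z[valid, None]            # perspective divide
    uv = (K @ pts.T).T[:, :2]                       # project to pixel coordinates
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w = depth.shape
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v = u[inside], v[inside]
    zc, c = z[valid][inside], colors[valid][inside]
    closer = zc < depth[v, u]                       # occlusion test against the view's depth
    out[v[closer], u[closer]] = c[closer]           # draw only unoccluded points
    return out
```

A point lying in front of the surface recorded in the depth map overwrites the pixel, while a point behind it is discarded, which is what makes the insertion look seamless rather than pasted on top.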