
Virtual View and Video Synthesis Without Camera Calibration or Depth Map



Abstract:

This paper proposes a method for synthesizing virtual views from two real views captured from different perspectives. The method is also readily applicable to video synthesis. It can quickly generate new virtual views and a smooth video of a view transition without camera calibration or a depth map. In this method, we first extract corresponding feature points from the real views using the SIFT algorithm. Second, we build a virtual multi-camera model. Then we calculate the coordinates of the feature points in each virtual perspective and project the real views onto that virtual perspective. Finally, the virtual views are synthesized. The method can be applied to most real scenes, such as indoor and street scenes.
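The abstract's pipeline (match feature points, place virtual viewpoints between the cameras, compute each feature's coordinates in the virtual view) can be sketched at its simplest as linear interpolation of matched point coordinates by a viewpoint parameter t. This is only an illustrative stand-in for the paper's virtual multi-camera model; the function name and the point data below are hypothetical.

```python
import numpy as np

def interpolate_feature_points(pts_left, pts_right, t):
    """Estimate feature coordinates in a virtual view.

    pts_left, pts_right: (N, 2) arrays of matched feature points
    (e.g. SIFT correspondences) in the two real views.
    t: virtual viewpoint parameter in [0, 1]; t=0 reproduces the
    left view's points, t=1 the right view's.
    """
    pts_left = np.asarray(pts_left, dtype=float)
    pts_right = np.asarray(pts_right, dtype=float)
    # Linear blend of the two coordinate sets for the virtual perspective.
    return (1.0 - t) * pts_left + t * pts_right

# Hypothetical matched correspondences between the two real views.
left = [(100.0, 50.0), (220.0, 80.0)]
right = [(140.0, 52.0), (260.0, 78.0)]
mid = interpolate_feature_points(left, right, 0.5)  # midway virtual view
```

In the paper's full method, the interpolated points would then guide warping (projecting) the real images onto the virtual perspective before blending; sweeping t from 0 to 1 yields the frames of a smooth transition video.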
Date of Conference: 11-13 December 2020
Date Added to IEEE Xplore: 03 February 2021
Conference Location: Chongqing, China
