In this paper, we demonstrate a 2D-to-3D video conversion system capable of real-time 1920×1080p conversion. The proposed system generates 3D depth information by fusing an edge-feature-based global scene depth gradient with texture-based local depth refinement. Combining the global depth gradient with local depth refinement yields 3D images of comfortable, vivid quality, and the algorithm has very low computational complexity. The software runs on a system with a multi-core CPU and a GPU. To optimize performance, we use several techniques, including a unified streaming dataflow, multi-threaded schedule synchronization, and GPU acceleration for depth image-based rendering (DIBR). With the proposed method, real-time 1920×1080p 2D-to-3D video conversion at 30 fps is achieved.
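The depth-fusion and DIBR steps described above can be sketched in simplified form. The gradient model, the contrast-based texture cue, the blending weight `alpha`, and the disparity scale `max_disp` below are all hypothetical placeholders, not the paper's actual formulas or parameters:

```python
def global_depth_gradient(height, width):
    # Scene-level prior: rows lower in the image are assumed nearer.
    # Depth value 1.0 = nearest, 0.0 = farthest (assumed convention).
    return [[y / (height - 1) for _ in range(width)] for y in range(height)]

def local_refinement(luma):
    # Texture cue: normalized horizontal luminance contrast, used here
    # as a stand-in for the paper's texture-based refinement.
    h, w = len(luma), len(luma[0])
    ref = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(1, w):
            ref[y][x] = abs(luma[y][x] - luma[y][x - 1]) / 255.0
    return ref

def fuse_depth(luma, alpha=0.8):
    # Weighted fusion of the global gradient and the local refinement;
    # alpha is a hypothetical blending weight.
    h, w = len(luma), len(luma[0])
    g = global_depth_gradient(h, w)
    r = local_refinement(luma)
    return [[min(1.0, alpha * g[y][x] + (1 - alpha) * r[y][x])
             for x in range(w)] for y in range(h)]

def dibr_shift(luma, depth, max_disp=4):
    # DIBR in its simplest form: synthesize a second view by shifting
    # each pixel horizontally in proportion to its fused depth.
    h, w = len(luma), len(luma[0])
    right = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nx = x - int(depth[y][x] * max_disp)
            if 0 <= nx < w:
                right[y][nx] = luma[y][x]
    return right
```

In the real system this per-pixel shifting is the part offloaded to the GPU, since every pixel can be warped independently; hole filling for disoccluded regions is omitted here for brevity.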