We propose an algorithm for estimating dense depth in dynamic scenes from multiple video streams captured by unsynchronized stationary cameras. We solve this problem by first imposing two assumptions: the scene motion follows a local constant-velocity model, and the temporal offset between cameras is constant within a short period of time. Based on these models, we investigate the geometric relations between the images of moving scene points, the scene depth, the scene motion, and the camera temporal offset, and develop a method for estimating the temporal offset. The proposed algorithm proceeds in three main steps: 1) estimation of the temporal offset between cameras; 2) synthesis of synchronized image pairs from the estimated offset and the optical flow fields computed in each view; and 3) stereo computation on the synthesized synchronous image pairs. The algorithm has been tested on both synthetic data and real image sequences, and promising quantitative and qualitative results are demonstrated in the paper.
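The frame-synthesis step (step 2) can be illustrated with a minimal sketch: under the local constant-velocity assumption, a pixel moving with optical flow (u, v) per frame interval is displaced by (dt·u, dt·v) at a virtual timestamp offset by dt. The function name, nearest-neighbor sampling, and dense-flow input below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def synthesize_frame(frame, flow, dt):
    """Warp `frame` forward by fraction `dt` of one frame interval.

    frame : (H, W) intensity image.
    flow  : (H, W, 2) per-pixel optical flow in pixels per frame,
            flow[..., 0] = horizontal, flow[..., 1] = vertical.
    dt    : estimated temporal offset, in frame intervals.

    Uses backward warping with nearest-neighbor sampling: each output
    pixel looks back along the scaled flow to find its source pixel
    (a simplification; real systems would interpolate and handle
    occlusions).
    """
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Source coordinates: step back along the flow scaled by dt.
    src_x = np.clip(np.round(xs - dt * flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - dt * flow[..., 1]).astype(int), 0, h - 1)
    return frame[src_y, src_x]
```

Given the temporal offset recovered in step 1, applying this warp to one camera's frames yields image pairs that behave as if captured simultaneously, so a standard stereo matcher can be run in step 3.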