Using stereo disparity or depth information to detect and track moving objects has received increasing attention in recent years. However, this approach suffers from several difficulties, such as synchronisation between two cameras and doubling of the image-data size. Moreover, traditional stereo-imaging systems have a limited field of view (FOV), which means that the cameras must be rotated when an object moves out of view. In this research, the authors present a depth-space partitioning algorithm for object tracking using a single-camera omni-stereo imaging system. The proposed method uses a catadioptric omnidirectional stereo-imaging system to capture omni-stereo image pairs. This imaging system has a 360° FOV, avoiding the need to rotate the cameras when tracking a moving object. To estimate omni-stereo disparity, the authors present a depth-space partitioning strategy: it partitions the three-dimensional depth space with a series of co-axial cylinders, models disparity estimation as a pixel-labelling problem and establishes an energy-minimisation function that is solved using graph-cuts optimisation. Based on the omni-stereo disparity-estimation results, the authors detect and track moving objects using the omni-stereo disparity motion vector, which is the difference between two consecutive disparity maps. Experiments on tracking a moving car validate the proposed method.
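The disparity motion vector mentioned above, the difference between two consecutive disparity maps, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the per-pixel thresholding used to flag moving regions, and the threshold value are all assumptions for the example.

```python
import numpy as np

def disparity_motion_vector(disp_prev, disp_curr, threshold=2.0):
    """Compute the disparity motion vector (per-pixel difference of two
    consecutive disparity maps) and flag pixels whose disparity change
    exceeds a threshold as belonging to a moving object.
    The threshold is a hypothetical parameter, not from the paper."""
    dmv = disp_curr.astype(np.float64) - disp_prev.astype(np.float64)
    moving_mask = np.abs(dmv) > threshold
    return dmv, moving_mask

# Toy example: a static background whose disparity is constant, with one
# 3x3 region whose disparity grows (e.g. an object approaching the camera).
prev = np.full((8, 8), 10.0)
curr = prev.copy()
curr[2:5, 2:5] += 5.0  # approaching object: disparity increases
dmv, mask = disparity_motion_vector(prev, curr)
```

In this sketch `mask` marks the 3x3 changed region, while the static background (whose disparity change is zero, below the threshold) is suppressed, which is the intuition behind using consecutive disparity maps to isolate moving objects.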