Detection and tracking of moving obstacles is central to collision-free navigation of autonomous rovers in dynamic environments. Template-matching-based methods for obstacle tracking have been proposed in the literature, but their ability to track dynamic obstacles is limited by scale and rotation variations. These variations arise from the relative velocity between the rover and the tracked obstacle: as the relative pose changes, the correlation between the stored template of the obstacle and the corresponding image region degrades over successive frames. In this paper we present three schemes aimed at improving the robustness of template-matching-based tracking using monocular vision. The proposed algorithms exploit the geometric constraints imposed on images captured by a fixed camera mounted on a mobile platform. We present experimental results comparing the performance of our technique with existing template-matching-based techniques, and we demonstrate its efficacy and computational feasibility in two real-time applications: an object-following behavior and an obstacle-avoidance behavior.
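The paper's three schemes are not reproduced in this abstract; as background, the correlation step that template-matching trackers build on can be sketched as normalized cross-correlation (NCC) between a stored template and each candidate window of the current frame. The sketch below is a minimal pure-NumPy illustration (function name `ncc_match` and the brute-force search are illustrative, not from the paper); it also makes the degradation the abstract describes easy to see, since the peak NCC score drops as the obstacle's scale or rotation drifts from the template.

```python
import numpy as np

def ncc_match(image, template):
    """Exhaustively slide `template` over `image` (both 2-D grayscale
    arrays) and return the (row, col) of the window with the highest
    normalized cross-correlation score, together with that score.
    NCC is invariant to affine brightness changes but, as the paper
    notes, not to scale or rotation of the target."""
    ih, iw = image.shape
    th, tw = template.shape
    # Zero-mean template and its norm, computed once.
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -1.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            if denom == 0:  # flat patch or flat template: undefined score
                continue
            score = (p * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Usage: cut a template out of a synthetic frame and relocate it.
rng = np.random.default_rng(0)
frame = rng.random((40, 40))
template = frame[10:18, 22:30].copy()
(row, col), score = ncc_match(frame, template)
# The exact patch is recovered with a score of (numerically) 1.0.
```

In a practical tracker this brute-force double loop is replaced by an FFT-based or library implementation (e.g. OpenCV's `cv2.matchTemplate` with `TM_CCOEFF_NORMED`), but the failure mode is the same: once the obstacle's apparent size or orientation changes, no window correlates well with the fixed template, which is the limitation the paper's schemes address.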