Colour stereo vision is the primary perception system of most Unmanned Ground Vehicles (UGVs): it provides not only a 3D perception of the terrain but also its colour and texture. The downside of present stereo vision technologies and processing algorithms is that they are limited by the cameras' field of view and maximum range, which can cause the vehicle to become trapped in cul-de-sacs. The philosophy underlying the framework proposed in this paper is to use near-field stereo information, together with the terrain's appearance, to train a classifier that labels the far-field terrain well beyond the stereo range in each incoming image. We propose an online, self-supervised learning method that learns far-field terrain traversability and adapts to unknown environments without hand-labelled training data. The method described in this paper enhances current near-to-far learning techniques by automating the selection of the learning strategy to be used, from among several strategies, based on the nature of the incoming real-time training data. Promising results obtained using real datasets from the DARPA-LAGR program are presented, and the performance is evaluated against hand-labelled ground truth.
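To make the near-to-far idea concrete, the following is a minimal sketch of one possible instantiation, not the paper's actual method: near-field image patches are labelled traversable or obstacle by stereo geometry, a simple colour-based classifier (here a nearest-centroid rule, chosen only for brevity) is fit online to those labels, and the same classifier is then applied to far-field patches beyond the stereo range using appearance alone. All patch data, colour statistics, and the choice of classifier are illustrative assumptions.

```python
import numpy as np

# Synthetic stand-in for one incoming frame. Each patch is summarised by
# its mean RGB colour; real systems would use richer texture features.
rng = np.random.default_rng(0)

# Near-field patches, labelled for free by stereo geometry:
# 1 = traversable (greenish ground), 0 = obstacle (brownish clutter).
# The colour distributions here are invented for illustration.
traversable = rng.normal([80, 140, 60], 10, size=(50, 3))
obstacle = rng.normal([120, 90, 70], 10, size=(50, 3))
X_near = np.vstack([traversable, obstacle])
y_near = np.array([1] * 50 + [0] * 50)

# Online training step: fit a nearest-centroid classifier to the
# stereo-labelled near-field patches of the current frame.
centroids = np.array([X_near[y_near == c].mean(axis=0) for c in (0, 1)])

def classify(patches):
    """Label patches by the nearest colour centroid (0 or 1)."""
    d = np.linalg.norm(patches[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Far-field patches, beyond stereo range, classified by appearance alone.
X_far = np.vstack([rng.normal([80, 140, 60], 10, size=(5, 3)),
                   rng.normal([120, 90, 70], 10, size=(5, 3))])
labels = classify(X_far)
```

Because the classifier is refit from each frame's own stereo labels, it adapts online to new terrain without any hand-labelled data; the paper's contribution additionally automates choosing among several such learning strategies per frame.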