Stereo matching in unstructured, outdoor environments is often confounded by the complexity of the scenery and thus may yield only sparse disparity maps. Two-dimensional visual imagery, on the other hand, offers dense information about the environment of mobile robots, but is often difficult to exploit. Training a supervised classifier to identify traversable regions within images that generalizes well across a large variety of environments requires a vast corpus of labeled examples. Autonomous learning of the traversable/untraversable distinction indicated by scene appearance is therefore a highly desirable goal of robot vision. We describe here a system for learning this distinction online without the involvement of a human supervisor. The system takes in imagery and range data from a pair of stereo cameras mounted on a small mobile robot and autonomously learns to produce a labeling of scenery. Supervision of the learning process comes entirely from information gathered from range data. Two types of boosted weak learners, Nearest Means and naive Bayes, are trained on this autonomously labeled corpus. The resulting classified images provide dense information about the environment, which can be used to fill in regions where stereo cannot find matches, or in lieu of stereo to direct robot navigation. This method has been tested across a large array of environment types and can produce very accurate labelings of scene imagery, as judged by human experts and as compared against purely geometric labelings. Because it is online and rapid, it avoids some of the problems related to color constancy and dynamic environments.
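The pipeline the abstract describes can be summarized as: use stereo range data to automatically label image patches as traversable or not, then train an appearance classifier on those labels so it can densely classify regions where stereo fails. The sketch below illustrates this idea with a plain Gaussian naive Bayes (one of the two weak-learner types named, without the boosting stage) on synthetic patch features; the feature choice (per-patch color statistics), the height-variance labeling rule, and its threshold are illustrative assumptions, not details from the paper.

```python
import math
import random

def auto_label(height_var, thresh=0.05):
    """Self-supervision from range data: patches whose stereo-derived height
    variance is low are labeled traversable (1), others untraversable (0).
    The threshold is an illustrative assumption, not from the paper."""
    return 1 if height_var < thresh else 0

class GaussianNB:
    """Minimal Gaussian naive Bayes over per-patch appearance features."""
    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.stats, self.priors = {}, {}
        for c in self.classes:
            rows = [x for x, lab in zip(X, y) if lab == c]
            self.priors[c] = len(rows) / len(X)
            self.stats[c] = []
            for col in zip(*rows):
                mu = sum(col) / len(col)
                sd = max((sum((v - mu) ** 2 for v in col) / len(col)) ** 0.5, 1e-3)
                self.stats[c].append((mu, sd))

    def predict(self, x):
        def loglik(c):
            s = math.log(self.priors[c])
            for v, (mu, sd) in zip(x, self.stats[c]):
                s += -math.log(sd * math.sqrt(2 * math.pi)) - (v - mu) ** 2 / (2 * sd ** 2)
            return s
        return max(self.classes, key=loglik)

# Synthetic stand-in for the stereo-supervised corpus: flat ground patches
# are greenish with low height variance; obstacles are brownish with high
# height variance. Features are (red, green) channel means per patch.
random.seed(0)
patches, labels = [], []
for _ in range(200):
    if random.random() < 0.5:
        patches.append([random.gauss(0.3, 0.05), random.gauss(0.6, 0.05)])
        labels.append(auto_label(0.01))   # flat ground -> traversable
    else:
        patches.append([random.gauss(0.5, 0.05), random.gauss(0.3, 0.05)])
        labels.append(auto_label(0.2))    # obstacle -> untraversable

clf = GaussianNB()
clf.fit(patches, labels)
# Dense labeling: classify a patch where stereo found no disparity match.
print(clf.predict([0.3, 0.6]))  # greenish patch -> 1 (traversable)
```

Once trained, the same classifier labels every patch in the image, yielding the dense traversability map that sparse stereo alone cannot provide; running this loop continually online is what sidesteps the color-constancy issues the abstract mentions.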
Published in: 2006 IEEE International Conference on Systems, Man and Cybernetics (SMC '06), Volume 1
Date of Conference: 8-11 Oct. 2006