A scheme for extracting and interpreting environment features from visual data for mobile robot navigation is presented. Each frame of the low-rate image stream acquired by the robot is processed as a separate image. The image is segmented with a graph-based approach to select the regions of interest (ROIs) of the visual scene. The ROIs are then processed to extract object edges using relaxation labeling. The resulting image is analyzed with a machine-learning approach based on embedded hidden Markov models (HMMs). Experimental results are presented for an office environment.
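The graph-based segmentation step mentioned above can be sketched as follows. This is a minimal illustration in the style of Felzenszwalb-Huttenlocher segmentation (union-find over pixel edges sorted by weight, with an adaptive per-component threshold); the paper's exact algorithm, connectivity, and parameters are not stated in the abstract, so the 4-connectivity, the intensity-difference edge weight, and the parameter `k` here are assumptions for illustration only.

```python
# Sketch of graph-based image segmentation (Felzenszwalb-Huttenlocher style).
# Assumptions: grayscale image as a list of lists, 4-connected pixel graph,
# edge weight = absolute intensity difference, scale parameter k.

def segment(image, k=10.0):
    h, w = len(image), len(image[0])
    n = h * w
    parent = list(range(n))
    size = [1] * n
    thresh = [k] * n  # adaptive threshold tau(C) = k / |C|, initially k / 1

    def find(x):
        # Union-find root lookup with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Build 4-connected edges weighted by absolute intensity difference.
    edges = []
    for y in range(h):
        for x in range(w):
            i = y * w + x
            if x + 1 < w:
                edges.append((abs(image[y][x] - image[y][x + 1]), i, i + 1))
            if y + 1 < h:
                edges.append((abs(image[y][x] - image[y + 1][x]), i, i + w))
    edges.sort()

    # Merge components in order of increasing edge weight whenever the
    # weight is below both components' adaptive thresholds.
    for wgt, a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb and wgt <= min(thresh[ra], thresh[rb]):
            parent[rb] = ra
            size[ra] += size[rb]
            thresh[ra] = wgt + k / size[ra]

    # Return a component label per pixel (row-major order).
    return [find(i) for i in range(n)]

# Example: two flat regions separated by a sharp intensity boundary.
img = [[0, 0, 100, 100],
       [0, 0, 100, 100]]
labels = segment(img, k=10.0)
# The two flat regions form two components; the weight-100 boundary
# edges exceed the adaptive thresholds and are never merged.
```

Each resulting component would correspond to a candidate ROI, which subsequent stages (relaxation-labeling edge extraction, embedded-HMM classification) would then process.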