Detecting visual changes in environments is an important computation with many applications in robotics and computer vision. Security cameras, remotely operated vehicles, and sentry robots could all benefit from robust change detection capabilities. We conjecture that, for a mobile camera system, the number of visual scenes experienced is limited relative to the space of all possible scenes, and that scenes do not frequently undergo major changes between observations. These assumptions can be exploited to ease the task of change detection and to reduce the computational cost of processing visual information by using memory to store previous computations. We demonstrate a method that learns the distribution of visual features in an environment via a self-organizing map; when a positional signal is available, the spatial distribution of these features can be learned as well. Our method uses a low-dimensional representation of visual features to rapidly detect changes in current visual inputs. The model encodes spatially distributed color histograms of real-world visual scenes captured by a camera moved through an environment. The distribution of the color histograms is learned by a self-organizing map trained on color data together with location data when available. We present tests of the model on detecting changes in an indoor environment.