We describe a robotic vision system that aligns a camera's optical axis with its direction of translation by estimating the focus of expansion. Visual processing is based on functional models of populations of neurons in cortical areas V1 through MST. Populations of motion energy neurons tuned to different orientations, positions, and directions of motion are successively transformed into a population of neurons that collectively encode the focus of expansion at 25 frames per second. We characterize the system's performance while it translates through a cluttered environment, and show that this performance is robust to variations in system parameters.
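The geometry underlying the focus of expansion (FOE) can be illustrated without the neural-population machinery the paper describes. Under pure translation, every optic-flow vector points radially away from the FOE, so the FOE can be recovered from sparse flow by a linear least-squares fit. The sketch below is a hypothetical illustration of that geometric constraint, not the paper's motion-energy implementation; all names and parameters here are assumptions.

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares focus-of-expansion estimate from sparse optic flow.

    Under pure translation, the flow (u, v) at image point (x, y) is
    radial about the FOE (fx, fy), so (x - fx, y - fy) is parallel to
    (u, v):  v*(x - fx) - u*(y - fy) = 0,
    i.e.     v*fx - u*fy = v*x - u*y.
    Stacking one such equation per flow sample gives A @ [fx, fy] = b.
    """
    x, y = points[:, 0], points[:, 1]
    u, v = flows[:, 0], flows[:, 1]
    A = np.column_stack([v, -u])
    b = v * x - u * y
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# Synthetic radial flow field expanding from a known FOE (illustrative data)
true_foe = np.array([12.0, -5.0])
rng = np.random.default_rng(0)
pts = rng.uniform(-100.0, 100.0, size=(200, 2))
flow = 0.05 * (pts - true_foe)          # pure expansion away from the FOE
est = estimate_foe(pts, flow)
```

In the system described above, an analogous readout is performed by a population of FOE-tuned units rather than by an explicit matrix solve, but both exploit the same radial-flow constraint.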