Modeling eye movements during search is important for building intelligent robotic vision systems and for understanding how humans select relevant information and structure behavior in real time. Previous models of visual search (VS) rely on the idea of "saliency maps," which indicate likely locations for targets of interest. In these models the eyes move to the location of maximum saliency. This approach has several drawbacks: (1) It assumes that oculomotor control is a greedy process, i.e., every eye movement is planned as if no further eye movements would be possible after it. (2) It does not account for temporal dynamics or for how information is integrated over time. (3) It does not provide a formal basis for understanding how optimal search should vary as a function of the operating characteristics of the visual system. To address these limitations, we reformulate the problem of VS as an Information-gathering Partially Observable Markov Decision Process (I-POMDP). We find that the optimal control law depends heavily on the Foveal-Peripheral Operating Characteristic (FPOC) of the visual system.
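The greedy saliency-map policy criticized in point (1) can be sketched as follows. This is a minimal illustration, not the authors' model: the function name, the inhibition-of-return step, and the parameter choices are all assumptions introduced here to make the greedy behavior concrete.

```python
import numpy as np

def greedy_scanpath(saliency, n_fixations=3, inhibition_radius=1):
    """Greedy fixation selection: repeatedly fixate the current saliency
    maximum, then suppress a neighborhood around it (a simple
    inhibition-of-return heuristic) so the next fixation moves elsewhere.

    Each fixation is chosen as if it were the last one -- no planning
    over future eye movements takes place.
    """
    sal = saliency.astype(float).copy()
    path = []
    for _ in range(n_fixations):
        # Fixate the single most salient location (greedy step).
        idx = np.unravel_index(np.argmax(sal), sal.shape)
        path.append(idx)
        # Inhibition of return: blank out a square neighborhood
        # around the fixated location.
        r0 = max(idx[0] - inhibition_radius, 0)
        c0 = max(idx[1] - inhibition_radius, 0)
        sal[r0:idx[0] + inhibition_radius + 1,
            c0:idx[1] + inhibition_radius + 1] = -np.inf
    return path
```

Because each fixation maximizes instantaneous saliency in isolation, the scanpath ignores temporal integration of evidence and the foveal-peripheral structure of the observer, which is exactly what the I-POMDP reformulation is designed to capture.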