When presented with a water or air gap barrier, animals often engage in peering, or side-to-side head movements, before leaping across the barrier. This strategy is used instead of stereopsis-based depth recovery and likely yields a much better estimate of distance. In this article we present a neurocomputational model of peering, hosted on a small robot, that explains the essential characteristics of peering reported in the literature. The model builds on recent evidence for non-direction-selective movement detectors in insects. Through a non-linear transformation of the retinal image, the model produces a ‘leap’ command without intermediate reconstruction of the animal's external space.
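The parallax geometry that peering exploits can be illustrated with a minimal sketch. Note that, unlike the model described above, which issues a leap command without reconstructing external space, this toy example recovers distance explicitly: a lateral head movement of velocity v makes a stationary target at distance d sweep across the retina at angular velocity ω ≈ v/d, so d ≈ v/ω. The threshold value and function names below are illustrative assumptions, not part of the model.

```python
def retinal_angular_velocity(head_velocity, distance):
    # Motion parallax (small-angle approximation): a lateral head
    # movement at v m/s makes a stationary target at d metres drift
    # across the retina at omega ~= v / d rad/s.
    return head_velocity / distance

def estimate_distance(head_velocity, omega):
    # Invert the parallax relation to recover target distance.
    return head_velocity / omega

def leap_command(head_velocity, omega, max_leap=0.25):
    # Hypothetical decision rule: leap only if the gap appears
    # shorter than an assumed maximal leap distance (metres).
    return estimate_distance(head_velocity, omega) <= max_leap

# A target 0.2 m away, peered at with a 0.05 m/s head sweep:
omega = retinal_angular_velocity(0.05, 0.2)   # 0.25 rad/s
print(estimate_distance(0.05, omega))          # recovers 0.2 m
print(leap_command(0.05, omega))               # within leap range
```

Nearer targets produce faster retinal drift for the same head sweep, which is why peering amplitude and retinal image speed together suffice for the leap decision without any stereoscopic computation.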