We propose a simplified depth-from-motion vision model based on leaky integrate-and-fire (LIF) neurons for edge detection and two-dimensional depth recovery. In the model, each LIF neuron detects irradiance edges passing through its receptive field in an optical flow field and responds by firing a spike when its firing criterion is satisfied. When a neuron fires, the time-of-travel of the spike-associated edge is passed as prediction information to the next synaptically linked neuron to determine that neuron's state. Correlations between input spikes and their timing thus encode depth across the visual field. Synaptic adaptation mediated by spike-timing-dependent plasticity improves the algorithm's robustness against inaccuracies caused by spurious edge propagation. The algorithm is characterized on both artificial and real image sequences, and its implementation in analog very large scale integrated (aVLSI) circuitry is also discussed.
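The LIF dynamics underlying the model (leaky integration of input drive, a firing criterion, and a post-spike reset) can be sketched as follows. This is a generic textbook LIF simulation, not the paper's exact model; the parameter values (`tau_m`, `v_thresh`, etc.) are illustrative assumptions.

```python
def simulate_lif(inputs, dt=1.0, tau_m=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Return spike times for a current-driven LIF neuron.

    inputs: sequence of input currents, one per time step.
    All parameters are hypothetical, chosen only for illustration.
    """
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(inputs):
        # Leaky integration: membrane potential decays toward rest
        # with time constant tau_m while accumulating the input drive.
        v += dt * (-(v - v_rest) / tau_m + i_in)
        if v >= v_thresh:                 # firing criterion satisfied
            spike_times.append(step * dt)  # emit a spike
            v = v_reset                    # reset after the spike
    return spike_times

if __name__ == "__main__":
    # A constant supra-threshold drive yields regular spiking;
    # in the model, such spike timing would carry edge time-of-travel.
    print(simulate_lif([0.1] * 100))
```

In the proposed model, an edge crossing a neuron's receptive field would supply the input drive, so the resulting spike times encode when the edge arrived, and differences in spike timing between synaptically linked neurons encode depth.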