Hierarchical generative models and Bayesian belief propagation have been shown to provide a theoretical framework that can account for perceptual processes, including feedback modulation. The framework explains both psychophysical and physiological experimental data and maps well onto the hierarchical distributed cortical anatomy. However, the complexity required to model cortical processes makes inference, even using approximate methods, very computationally expensive. Thus, existing models are typically limited to tree-structured networks with no loops, use small toy examples, or fail to account for certain perceptual aspects such as invariance to transformations or feedback reconstruction. We propose a novel, rigorous methodology to 1) implement selectivity and invariance using belief propagation on Bayesian networks; 2) combine feedback information from multiple parents, significantly reducing the number of parameters and operations; and 3) deal with loops using loopy belief propagation and different sampling methods. To demonstrate these properties, we implement a Bayesian network that reproduces the structure and approximates the operations of HMAX, a biologically inspired large-scale hierarchical model of object recognition. Hence, the proposed model not only achieves successful feedforward recognition invariant to position and size, but also extends the original model by including high-level feedback connections that reproduce modulatory effects such as illusory contour completion, attention, and mental imagery. Overall, the proposed methodology, based on state-of-the-art probabilistic approaches, can be used to build biologically plausible models of hierarchical perceptual organization that include top-down and bottom-up interactions. Furthermore, the proposed framework is suitable for large-scale parallel distributed hardware implementations.
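To make the central mechanism concrete, the following is a minimal sketch of Pearl-style belief propagation on a toy two-node Bayesian network (hidden cause X with a single observed child Y). This is purely illustrative and not the paper's large-scale hierarchical implementation; all variable names and the numeric values are hypothetical. Bottom-up evidence is carried by a lambda message and combined with the top-down prior (pi) to form a posterior belief, mirroring the bottom-up/top-down interaction described above.

```python
# Illustrative sketch only: Pearl-style belief propagation on a
# two-node network X -> Y. Names and numbers are hypothetical,
# not taken from the paper's model.

def normalize(v):
    s = sum(v)
    return [x / s for x in v]

# Top-down prior over the hidden cause X (2 states) -> pi message.
prior_x = [0.6, 0.4]

# Conditional distribution P(Y | X): row i gives P(Y | X=i).
p_y_given_x = [[0.9, 0.1],
               [0.2, 0.8]]

# Evidence: Y is observed in state 1 -> lambda vector for Y.
lam_y = [0.0, 1.0]

# Bottom-up lambda message from Y to X:
# lambda_Y(x) = sum_y P(y | x) * lambda(y)
lam_to_x = [sum(p_y_given_x[i][j] * lam_y[j] for j in range(2))
            for i in range(2)]

# Posterior belief combines top-down and bottom-up information:
# BEL(x) is proportional to pi(x) * lambda(x)
belief_x = normalize([prior_x[i] * lam_to_x[i] for i in range(2)])
print(belief_x)
```

In a hierarchical network the same message-passing step is repeated at every node, with pi messages flowing down from parents and lambda messages flowing up from children; on graphs with loops, the iterative (loopy) variant of this update is applied until the beliefs stabilize.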