In this paper, we propose a novel method for spatial context modeling aimed at boosting visual discriminating power. We are particularly interested in how to model high-order local spatial contexts rather than the intensively studied second-order spatial contexts, i.e., co-occurrence relations. Motivated by the recent success of random forests in learning discriminative visual codebooks, we present a spatialized random forest (SRF) approach, which can encode high-order local spatial contexts of unlimited order. Through spatially random neighbor selection and random histogram-bin partition during tree construction, the SRF explores much more complicated and informative local spatial patterns in a randomized manner. Owing to the discriminative-capability test applied to the random partition at each tree node's split, a set of informative high-order local spatial patterns is derived, and new images are then encoded by counting the occurrences of these discriminative local spatial patterns. Extensive comparison experiments on face recognition and object/scene classification clearly demonstrate the superiority of the proposed spatial context modeling method over other state-of-the-art approaches.
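The node-split idea summarized above — propose random (spatial neighbor, histogram-bin partition) tests and keep the most discriminative one — can be illustrated with a minimal sketch. This is a toy reading of the abstract, not the authors' implementation: the patch representation (a small grid of quantized codeword indices), the candidate offsets, the information-gain criterion, and all function names are illustrative assumptions.

```python
import random
from collections import Counter
from math import log2

def entropy(labels):
    # Shannon entropy of a non-empty label multiset
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def split_gain(samples, labels, test):
    # Information gain of partitioning samples by whether `test` fires
    left = [l for s, l in zip(samples, labels) if test(s)]
    right = [l for s, l in zip(samples, labels) if not test(s)]
    if not left or not right:
        return 0.0
    n = len(labels)
    return (entropy(labels)
            - (len(left) / n) * entropy(left)
            - (len(right) / n) * entropy(right))

def random_test(grid_size, n_codewords, rng):
    # Spatially random neighbor selection: a random offset from the center cell
    dy, dx = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1), (-1, -1), (1, 1)])
    # Random histogram-bin partition: a random half of the codeword bins
    bins = frozenset(rng.sample(range(n_codewords), n_codewords // 2))
    def test(patch):
        cy = cx = grid_size // 2
        return patch[cy + dy][cx + dx] in bins
    return test

def best_split(samples, labels, grid_size, n_codewords, n_candidates=50, seed=0):
    # Discriminative-capability check: among random candidate tests,
    # keep the one with the highest information gain on the labels
    rng = random.Random(seed)
    candidates = [random_test(grid_size, n_codewords, rng)
                  for _ in range(n_candidates)]
    return max(candidates, key=lambda t: split_gain(samples, labels, t))
```

Chaining such tests down a tree path conjoins several (offset, bin-subset) conditions, which is how deeper nodes express higher-order spatial patterns; a new image would then be encoded by counting how often each retained pattern fires across its patches.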