Matching pedestrians across disjoint camera views is a challenging task, since the observations are separated in time and space and pedestrian appearance may vary considerably. Several pedestrian-matching approaches have been proposed recently, but they either rely on overly complex representations or consider only color information, discarding the spatial structural information of the pedestrian. To describe spatial structural information in color space, we propose a distance-based local binary pattern (DLBP) descriptor. Besides spatial structure, the color intensity itself is also an important cue for matching pedestrians across disjoint camera views. To combine these two kinds of information effectively, we further propose a novel CI_DLBP descriptor, which unifies color intensity and DLBP by learning their joint distributions (2-D histograms) at each color channel. In addition, unlike previous approaches that match pedestrians by their whole bodies, we develop a part-based pedestrian representation, because the color density and spatial structure of the upper outer garment usually differ from those of the lower garment. Experimental results on challenging realistic scenarios and on the VIPeR dataset validate the proposed DLBP operator, the CI_DLBP descriptor, and the part-based representation for pedestrian matching across disjoint camera views. Compared with existing methods based on color information, the proposed CI_DLBP approach performs better.
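To make the descriptor idea concrete, the following is a minimal sketch of a joint pattern/intensity histogram per color channel. The abstract does not define the DLBP operator in detail, so the sketch substitutes the classic 8-neighbor intensity-difference LBP for the distance-based variant; the function names and the 16-bin intensity quantization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lbp_codes(channel):
    """8-neighbor local binary pattern codes for interior pixels.

    Stand-in for the paper's DLBP: the actual operator thresholds a
    distance in color space, whose definition is not given here.
    """
    c = channel[1:-1, 1:-1]
    # Offsets of the 8 neighbors, ordered clockwise from top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = channel[1 + dy: channel.shape[0] - 1 + dy,
                     1 + dx: channel.shape[1] - 1 + dx]
        # Set this bit where the neighbor is at least the center value.
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes

def joint_pattern_intensity_histogram(channel, n_intensity_bins=16):
    """Joint 2-D histogram of (pattern code, quantized intensity),
    mirroring the CI_DLBP idea of unifying structure and color intensity
    within one channel."""
    codes = lbp_codes(channel)
    inten = channel[1:-1, 1:-1]
    # Quantize 0..255 intensities into n_intensity_bins bins.
    bins = np.minimum((inten.astype(int) * n_intensity_bins) // 256,
                      n_intensity_bins - 1)
    hist = np.zeros((256, n_intensity_bins))
    np.add.at(hist, (codes.ravel(), bins.ravel()), 1)
    return hist / hist.sum()  # normalize to a distribution
```

A full pedestrian descriptor along these lines would compute one such histogram per color channel and per body part (upper and lower garment regions), then concatenate them before matching.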