Human tracking is an essential step in many computer vision-based applications. As single-view tracking may not be sufficiently robust or accurate, tracking based on multiple cameras has been widely studied in recent years. This paper presents a distributed human tracking method for a smart camera network and introduces a particle filter design based on Histogram of Oriented Gradients (HOG) features and a color histogram. The proposed adaptive motion model estimates each target's speed from the history of its recent displacements, improving the robustness of the tracker by reducing the probability of missing targets. In addition, a distributed data fusion method is proposed that fuses information from the cameras through an adaptive weighted average. Each camera sends its own beliefs about the targets' states, together with the corresponding weights, to the other cameras in its communication range. The fusion weights are determined by each camera based on the certainty of its view of each target and an occlusion indicator that depends on the distance between detected targets. The performance of the proposed scheme is evaluated on the PETS2009 S2.L1 dataset. It is shown that the proposed data fusion method leads to more robust tracking across multiple cameras and improves the handling of uncertainties and occlusions using multi-view information. In addition, the amount of data transferred over the network is significantly reduced in comparison with centralized methods.
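The adaptive weighted-average fusion described above can be sketched roughly as follows. This is an illustrative sketch only: the weight formula (certainty scaled down by the occlusion indicator), the exponential distance-based occlusion indicator, and all function names are assumptions for illustration, not the paper's actual equations.

```python
import numpy as np

def occlusion_indicator(target_pos, other_positions, radius=1.0):
    """Assumed indicator: approaches 1 as another detection gets close,
    0 when the target is isolated (based on inter-target distance)."""
    if len(other_positions) == 0:
        return 0.0
    d_min = min(np.linalg.norm(np.asarray(target_pos) - np.asarray(p))
                for p in other_positions)
    return float(np.exp(-d_min / radius))

def fuse_estimates(states, certainties, occlusions):
    """Fuse per-camera state estimates for one target with an adaptive
    weighted average; each camera's weight combines its view certainty
    with its occlusion indicator (weight form is an assumption)."""
    states = np.asarray(states, dtype=float)       # (n_cameras, state_dim)
    weights = np.asarray(certainties) * (1.0 - np.asarray(occlusions))
    weights = weights / weights.sum()              # normalize over cameras
    return weights @ states                        # fused state estimate
```

For example, two cameras with equal certainty and no occlusion simply average their estimates, while a camera reporting a nearby second detection (high occlusion indicator) contributes less to the fused state.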