This paper proposes a distributed multi-camera tracking algorithm based on interacting particle filters. A robust multi-view appearance model is obtained by sharing training samples between views. Motivated by incremental learning, we create an intermediate data representation between two camera views: generative subspaces treated as points on a Grassmann manifold. Sampling along the geodesic between the subspaces of training data from the two views uncovers descriptions that account for viewpoint changes. A boosted appearance model is then trained on the training samples projected onto these generative subspaces. For each object, a pair of particle filters, one local and one global, is used: the local particle filter models the object's motion in the image plane, while the global particle filter models its motion on the ground plane. These particle filters are integrated into a unified Interacting Markov Chain Monte Carlo (IMCMC) framework. We show how priors derived from scene-specific information are induced into the global particle filter to improve tracking accuracy. The proposed algorithm is validated with extensive experiments on challenging camera-network data and compares favorably with state-of-the-art object trackers.
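To illustrate the geodesic-sampling step, the following is a minimal sketch (not the paper's exact procedure) of how intermediate subspaces between two views can be generated on the Grassmann manifold, using the standard geodesic parametrisation via principal angles (Edelman et al.). The function name `grassmann_geodesic`, the dimensions, and the re-orthonormalisation step are illustrative assumptions.

```python
import numpy as np

def grassmann_geodesic(Y1, Y2, t):
    """Return an orthonormal basis for the point at parameter t on the
    Grassmann geodesic from span(Y1) to span(Y2).

    Y1, Y2 : (n, d) orthonormal bases of the two view subspaces.
    t      : scalar in [0, 1]; t=0 gives span(Y1), t=1 gives span(Y2).
    Assumes Y1.T @ Y2 is invertible (subspaces not orthogonal)."""
    M = Y1.T @ Y2                               # d x d overlap matrix
    # Component of Y2 orthogonal to span(Y1), scaled by M^{-1};
    # its SVD yields the principal angles between the subspaces.
    G = (Y2 - Y1 @ M) @ np.linalg.inv(M)
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    theta = np.arctan(s)                        # principal angles
    # Geodesic: rotate within span(Y1) toward the orthogonal directions U.
    Yt = Y1 @ Vt.T @ np.diag(np.cos(t * theta)) + U @ np.diag(np.sin(t * theta))
    Q, _ = np.linalg.qr(Yt)                     # re-orthonormalise (numerical safety)
    return Q

# Intermediate subspaces sampled between two hypothetical view subspaces;
# projecting training samples onto each would give the multi-view features.
rng = np.random.default_rng(0)
Y1, _ = np.linalg.qr(rng.standard_normal((20, 4)))   # basis for view 1
Y2, _ = np.linalg.qr(rng.standard_normal((20, 4)))   # basis for view 2
subspaces = [grassmann_geodesic(Y1, Y2, t) for t in np.linspace(0.0, 1.0, 5)]
```

Projecting the shared training samples onto each sampled subspace (e.g. `Y.T @ x`) would then supply the features on which the boosted appearance model is trained.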