Embedding Motion in Model-Based Stochastic Tracking

3 Author(s): Odobez, J.; Gatica-Perez, D.; Ba, S.O. (IDIAP Research Institute)

Particle filtering is now established as one of the most popular methods for visual tracking. Within this framework, there are two important considerations. The first is the generic assumption that the observations are temporally independent given the sequence of object states. The second is the choice, often made in the literature, of the transition prior as the proposal distribution. The current observations are then not taken into account, requiring the noise process of this prior to be large enough to handle abrupt trajectory changes. As a result, many particles are either wasted in low-likelihood regions of the state space, resulting in low sampling efficiency, or, more importantly, propagated to distractor regions of the image, resulting in tracking failures. In this paper, we propose to handle both considerations using motion. We first argue that, in general, observations are conditionally correlated, and propose a new model to account for this correlation, allowing for the natural introduction of implicit and/or explicit motion measurements in the likelihood term. Second, explicit motion measurements are used to drive the sampling process towards the most likely regions of the state space. Overall, the proposed model handles abrupt motion changes and filters out visual distractors when tracking objects with generic models based on shape or color distribution. Results were obtained on head-tracking experiments using several sequences with a moving camera involving large dynamics. When compared against the Condensation algorithm, they demonstrated the superior tracking performance of our approach.
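To make the contrast in the abstract concrete, the following is a minimal sketch (not the authors' actual model) of one sequential importance resampling step for a 1-D state. With `motion_estimate=None` it reduces to the transition-prior (bootstrap) proposal the abstract criticises; passing an explicit motion measurement shifts the proposal toward the observed displacement, concentrating particles in the likely region of the state space. The Gaussian likelihood and all parameter names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observation,
                         transition_noise, obs_noise, motion_estimate=None):
    """One SIR step for a 1-D state (illustrative, not the paper's model).

    motion_estimate=None  -> bootstrap proposal: particles are diffused by the
                             transition prior only, so transition_noise must be
                             large to follow abrupt motion.
    motion_estimate=d     -> motion-driven proposal: particles are shifted by
                             the measured displacement d before diffusion.
    """
    drift = 0.0 if motion_estimate is None else motion_estimate
    # Propose new particles (transition prior, optionally motion-driven).
    proposed = particles + drift + rng.normal(0.0, transition_noise, particles.size)
    # Reweight by the observation likelihood (Gaussian, for illustration).
    lik = np.exp(-0.5 * ((observation - proposed) / obs_noise) ** 2)
    weights = weights * lik
    weights = weights / weights.sum()
    # Systematic resampling when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * particles.size:
        idx = rng.choice(particles.size, particles.size, p=weights)
        proposed = proposed[idx]
        weights = np.full(particles.size, 1.0 / particles.size)
    return proposed, weights
```

With a motion-driven proposal, a small `transition_noise` suffices even when the target jumps abruptly, which is exactly the sampling-efficiency argument made in the abstract.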

Published in:

IEEE Transactions on Image Processing (Volume: 15, Issue: 11)