Visual Object Tracking Based on Combination of Local Description and Global Representation

2 Author(s)
Li Sun; Sch. of Electron. & Inf. Eng., Xi'an Jiaotong Univ., Xi'an, China; Guizhong Liu

This paper presents a novel method for visual object tracking based on the combination of local scale-invariant feature transform (SIFT) description and global incremental principal component analysis (PCA) representation under loosely constrained conditions. The state of the object is defined by the position and shape of a parallelogram, so tracking results are given by locating the object in every frame with a parallelogram. The whole method is constructed in the framework of a particle filter, which comprises two models: the dynamic model and the observation model. In the dynamic model, particle states are predicted with the help of local SIFT descriptors. Local key-point matching between successive frames based on SIFT descriptors provides an important cue for the prediction of particle states; thus, particles can be efficiently spread in the neighborhood of the predicted position. In the observation model, every particle is evaluated by a local key-point-weighted incremental PCA representation, which describes the object more accurately by assigning large weights to pixels in the influence area of key points. Moreover, by incorporating a dynamic forgetting factor, the PCA eigenvectors can be updated online according to the object states, which makes the method more adaptable to different situations. Experimental results show that, compared with other state-of-the-art methods, the proposed method is robust, especially under difficult conditions such as strong motion of both object and background, large pose change, and illumination change.
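The particle-filter loop sketched in the abstract, with SIFT-guided prediction and a key-point-weighted PCA likelihood, can be illustrated roughly as below. This is a minimal NumPy sketch, not the authors' implementation: the SIFT displacement is assumed to come from an external matcher, the particle state is reduced to a 2-D position rather than a full parallelogram, and all names (`predict_particles`, `weighted_pca_likelihood`) and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_particles(particles, sift_displacement, spread=2.0):
    # Dynamic model: spread particles around the position suggested by
    # SIFT key-point matching between successive frames (displacement
    # assumed to be supplied by an external matcher).
    return particles + sift_displacement + rng.normal(0.0, spread, particles.shape)

def weighted_pca_likelihood(patch, mean, eigvecs, weights, sigma=0.1):
    # Observation model: score a candidate patch by its weighted
    # reconstruction error under the (incrementally updated) PCA
    # subspace; pixels near SIFT key points receive larger weights.
    x = patch - mean
    proj = eigvecs @ (eigvecs.T @ x)          # projection onto PCA subspace
    err = np.sum(weights * (x - proj) ** 2)   # weighted residual energy
    return np.exp(-err / (2 * sigma ** 2))

# Toy run: 100 particles tracking a 2-D position.
particles = np.zeros((100, 2))
particles = predict_particles(particles, sift_displacement=np.array([3.0, -1.0]))

# Toy appearance model: 16-pixel patches, 3-D PCA subspace.
mean = np.zeros(16)
eigvecs, _ = np.linalg.qr(rng.normal(size=(16, 3)))  # orthonormal basis
weights = np.ones(16)
weights[:4] = 3.0                                    # up-weight key-point pixels

scores = np.array([weighted_pca_likelihood(rng.normal(size=16) * 0.05,
                                           mean, eigvecs, weights)
                   for _ in range(len(particles))])
scores /= scores.sum()                               # normalized particle weights

# Resample particles in proportion to their observation weights.
idx = rng.choice(len(particles), size=len(particles), p=scores)
particles = particles[idx]
```

In the paper's full method, the PCA basis `eigvecs` would additionally be updated online with a dynamic forgetting factor after each frame; that incremental update step is omitted here for brevity.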

Published in:

IEEE Transactions on Circuits and Systems for Video Technology (Volume: 21, Issue: 4)