This paper presents a novel method for visual object tracking that combines local scale-invariant feature transform (SIFT) description with global incremental principal component analysis (PCA) representation under loosely constrained conditions. The object state is defined by the position and shape of a parallelogram; that is, the tracking result in every frame is a parallelogram that bounds the object. The whole method is built in the particle filter framework, which comprises two models: a dynamic model and an observation model. In the dynamic model, particle states are predicted with the help of local SIFT descriptors. Local key point matching between successive frames based on SIFT descriptors provides an important cue for predicting particle states; thus, particles can be spread efficiently in the neighborhood of the predicted position. In the observation model, every particle is evaluated by a local key point-weighted incremental PCA representation, which describes the object more accurately by assigning large weights to pixels in the influence area of key points. Moreover, by incorporating a dynamic forgetting factor, the PCA eigenvectors are updated online according to the object state, which makes the method more adaptable across different situations. Experimental results show that, compared to other state-of-the-art methods, the proposed method is robust especially under difficult conditions such as strong motion of both object and background, large pose change, and illumination change.
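The two models described above can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the SIFT-derived motion cue is stood in for by a single displacement vector `sift_shift` (in the paper it would come from matching key points between successive frames), the eigenbasis and per-pixel weight map are assumed inputs, and the dynamic forgetting factor for the online PCA update is omitted. The function names and parameters (`predict`, `likelihood`, `spread`, `sigma`) are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, sift_shift, spread=2.0):
    """Dynamic model: spread particles around the position suggested by
    the key-point motion cue (sift_shift stands in for the SIFT matching
    result between successive frames)."""
    return particles + sift_shift + rng.normal(0.0, spread, particles.shape)

def likelihood(patch, mean, eigvecs, pixel_weights, sigma=0.1):
    """Observation model: key point-weighted PCA likelihood. Pixels in
    the influence area of key points carry larger weights, so their
    reconstruction error counts more toward rejecting a particle."""
    centered = (patch - mean).ravel()
    coeffs = eigvecs @ centered                 # project onto eigenbasis
    residual = centered - eigvecs.T @ coeffs    # out-of-subspace error
    err = np.sum(pixel_weights.ravel() * residual**2)
    return np.exp(-err / (2.0 * sigma**2))

# Toy usage: 2-D particle positions nudged by the motion cue, then one
# particle's patch scored against a (here trivial) PCA template.
particles = np.zeros((50, 2))
moved = predict(particles, sift_shift=np.array([5.0, -3.0]))

template_mean = np.zeros((4, 4))      # assumed learned appearance mean
eigvecs = np.zeros((3, 16))           # assumed 3 eigenvectors (flattened)
weights = np.ones((4, 4))             # uniform weights for illustration
score = likelihood(template_mean, template_mean, eigvecs, weights)
```

A patch identical to the template mean reconstructs perfectly, so its likelihood is exactly 1; particles land around the cue-predicted position with Gaussian spread, which is the "efficient spreading" the abstract refers to.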
IEEE Transactions on Circuits and Systems for Video Technology (Volume: 21, Issue: 4)
Date of Publication: April 2011