
Object tracking based on the combination of learning and cascade particle filter



Abstract:

The problem of object tracking in dense clutter is a challenge in computer vision. This paper proposes a method for tracking objects robustly by combining the online selection of discriminative color features with the offline selection of discriminative Haar features. Furthermore, a cascade particle filter with four stages of importance sampling is used to fuse the two kinds of features efficiently. When the illumination changes dramatically, the Haar features selected offline play the major role; when the object is occluded or its rotation angle is very large, the color features selected online play the major role. Experimental results show that the proposed method performs well under illumination change, occlusion, object scale change, and abrupt motion of the object or camera.
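
The cascade described above can be viewed as successive importance-sampling stages, each of which reweights and resamples the particle set with a progressively more specific likelihood. The sketch below is a minimal Python illustration of that idea under stated assumptions, not the paper's implementation: the state is a 2-D object center, the motion model is a Gaussian random walk, and the stage likelihood functions (which, in the paper's spirit, would score offline-learned Haar features in early stages and online-selected color features in later ones) are placeholders supplied by the caller.

import numpy as np

def cascade_pf_step(particles, stage_likelihoods, motion_std=5.0, rng=None):
    """One frame of a cascaded particle filter (illustrative sketch only).

    particles         : (N, 2) array of (x, y) object-center hypotheses.
    stage_likelihoods : list of callables, each mapping the (N, 2) particle
                        array to an (N,) array of non-negative likelihoods.
    """
    rng = rng or np.random.default_rng()
    # Propagate hypotheses with a simple Gaussian random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Each importance-sampling stage reweights and resamples, so later,
    # more discriminative stages refine the survivors of earlier stages.
    for likelihood in stage_likelihoods:
        w = np.clip(likelihood(particles), 0.0, None) + 1e-12
        w /= w.sum()
        particles = particles[rng.choice(len(particles), len(particles), p=w)]
    # The mean of the final particle set is the state estimate for the frame.
    return particles, particles.mean(axis=0)

A full tracker would run this step once per frame, alternating it with updates to the online color-feature model.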
Date of Conference: 11-14 October 2009
Date Added to IEEE Xplore: 04 December 2009
Print ISSN: 1062-922X
Conference Location: San Antonio, TX, USA
Department of Computer Science, Xiamen University, Xiamen, China

I. Introduction

Many powerful algorithms have been proposed for object tracking. They can be classified into four classes: region-based methods [1][2][3], feature-based methods [4][5][6][7], deformable-template-based methods [8][9][10][11], and model-based methods [12][13][14]. However, most of these traditional tracking approaches depend on strong assumptions. For example, they assume that motion and appearance are continuous and that a fixed set of representative features, selected before the tracking task starts, can always distinguish the objects of interest from the background. Unfortunately, motion and appearance continuity is often violated by abrupt motion of the object or the camera; an object detector trained offline can be integrated into the tracker to address this problem. Moreover, fixed features cannot always separate the objects from the background, for two reasons: the object's appearance changes when the illumination changes, occlusion occurs, or the viewpoint varies; and the background changes as the target object moves from place to place.

The remedy for this drawback of fixed features is to select discriminative features online during tracking. For example, Collins et al. [15] proposed a method in which a feature evaluation mechanism embedded in a mean-shift tracking system adaptively selects the top-ranked discriminative features. Wang et al. [16] selected discriminative features online from a set of Haar features to build the appearance model. However, a purely online learning system has an obvious problem: setting the online feature-updating ratio can be very difficult, because over-updating may ruin the original model. This can be addressed by combining offline and online learning; for example, Li et al. [17] proposed a cascade particle filter with discriminative observers of different lifespans. The features selected online represent the object's appearance more specifically, while the features selected offline produce more accurate results.
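
To make the online selection step concrete, the following sketch ranks candidate color features by the variance-ratio criterion used by Collins et al. [15]: a feature is discriminative when the log-likelihood ratio of its object and background histograms has large spread between the two classes but small spread within each class. The histogram size, the smoothing, and the function names here are assumptions made for illustration, not code from [15].

import numpy as np

def variance_ratio(obj_vals, bg_vals, bins=32, vrange=(0.0, 255.0)):
    """Variance-ratio score of one candidate feature (sketch after [15]).

    obj_vals, bg_vals : 1-D arrays of the feature evaluated on pixels of
                        the object region and of the surrounding background.
    """
    p, _ = np.histogram(obj_vals, bins=bins, range=vrange)
    q, _ = np.histogram(bg_vals, bins=bins, range=vrange)
    p = (p + 1.0) / (p.sum() + bins)      # Laplace-smoothed probabilities
    q = (q + 1.0) / (q.sum() + bins)
    L = np.log(p / q)                     # per-bin log-likelihood ratio

    def var(w):                           # variance of L under weights w
        m = np.sum(w * L)
        return np.sum(w * (L - m) ** 2)

    # Large between-class variance over small within-class variance means
    # the feature separates the object from the background well.
    return var(0.5 * (p + q)) / (var(p) + var(q) + 1e-12)

def select_top_features(candidates, obj_pixels, bg_pixels, k=3):
    """Keep the k candidate features with the highest variance ratio.
    Each candidate is a callable mapping an (N, 3) RGB pixel array to (N,)."""
    scores = [variance_ratio(f(obj_pixels), f(bg_pixels)) for f in candidates]
    return [candidates[i] for i in np.argsort(scores)[::-1][:k]]

Re-ranking the candidates every few frames lets a tracker swap in features that remain discriminative as the background changes, which is the behavior the paragraph above attributes to [15] and [16].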

