Accurate feature point tracks through long sequences are a valuable substrate for many computer vision applications, e.g. non-rigid body tracking, video segmentation, video matching, and even object recognition. Existing algorithms may be arranged along an axis indicating how global the motion model used to constrain tracks is. Local methods, such as the KLT tracker, depend on local models of feature appearance and are easily distracted by occlusions, repeated structure, and image noise. This leads to short tracks, many of which are incorrect; on their own, these require considerable postprocessing to obtain a useful result. In restricted scenes, for example a rigid scene through which a camera is moving, such postprocessing can exploit global motion models to perform "guided matching", which yields long, high-quality feature tracks. However, many scenes of interest contain multiple motions or significant non-rigid deformations, which mean that guided matching cannot be applied. In this paper we propose a general amalgam of local and global models to improve tracking even in these difficult cases. By viewing rank-constrained tracking as a probabilistic model of 2D tracks rather than of 3D motion, we obtain a strong, robust motion prior derived from the global motion in the scene. The result is a simple and powerful prior whose strength is easily tuned, enabling its use in any existing tracking algorithm.
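The core idea of a rank-constrained motion prior can be illustrated with a small sketch. This is not the paper's implementation; it is a minimal NumPy example under the standard factorization-style assumption that the 2D tracks of P points over F frames, stacked into a 2F x P measurement matrix W, lie near a low-rank subspace when they follow a common global motion. The residual of each track after the best rank-r fit then acts as a prior penalty: tracks that disagree with the global motion stand out. The function name `rank_r_prior_residual` and the toy data are illustrative choices, not from the source.

```python
import numpy as np

def rank_r_prior_residual(W, r=3):
    """Per-track (per-column) residual of W after its best rank-r approximation.

    W is a 2F x P matrix of stacked 2D track coordinates. A large residual
    for a column suggests that track is inconsistent with the dominant
    rank-r (global) motion model.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    W_fit = (U[:, :r] * s[:r]) @ Vt[:r, :]      # best rank-r approximation (Eckart-Young)
    return np.linalg.norm(W - W_fit, axis=0)    # Euclidean residual per column

# Toy example: 40 tracks over 10 frames (20 rows: x and y stacked), all drawn
# exactly from a rank-3 subspace, then the first track is corrupted.
rng = np.random.default_rng(0)
basis = rng.standard_normal((20, 3))            # rank-3 motion basis
coeffs = rng.standard_normal((3, 40))           # per-track coefficients
W = basis @ coeffs
W[:, 0] += 2.0 * rng.standard_normal(20)        # corrupt track 0

res = rank_r_prior_residual(W, r=3)
print(int(res.argmax()))                        # the corrupted track has the largest residual
```

In a tracker, such a residual could be folded into the matching cost for candidate feature positions, with a tunable weight controlling how strongly the global motion model constrains each local match.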
Date of Conference: 17-22 June 2007