A new algorithm is proposed for background subtraction in highly dynamic scenes. Background subtraction is equated to the dual problem of saliency detection: background points are those deemed not salient by a suitable comparison of object and background appearance and dynamics. Drawing inspiration from biological vision, saliency is defined locally, using center-surround computations that measure local feature contrast. A discriminant formulation is adopted, in which the saliency of a location is the discriminant power of a set of features with respect to the binary classification problem that opposes center to surround. To account for both motion and appearance, and to achieve robustness to highly dynamic backgrounds, these features are spatiotemporal patches, which are modeled as dynamic textures. The resulting background subtraction algorithm is fully unsupervised, requires no training stage to learn background parameters, and depends only on the relative disparity of motion between the center and surround regions. This makes it insensitive to camera motion. The algorithm is tested on challenging video sequences and shown to outperform various state-of-the-art techniques for background subtraction.
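To make the center-surround idea concrete, the following is a minimal sketch of discriminant-style saliency on a single feature map. It is not the paper's method: the dynamic-texture patch models are replaced by a hypothetical scalar feature per pixel, and discriminant power is approximated by the symmetric KL divergence between center and surround feature histograms. Window half-sizes `c` and `s`, the bin count, and the divergence choice are all illustrative assumptions.

```python
import numpy as np

def center_surround_saliency(feature_map, c=4, s=12, bins=16, eps=1e-8):
    """Sketch of center-surround saliency via local feature contrast.

    For each interior location, builds a histogram of a hypothetical
    scalar feature over a center window (half-size c) and over the
    surrounding ring (window half-size s minus the center counts), and
    scores saliency as their symmetric KL divergence. High divergence
    means the center is easily discriminated from its surround, i.e.
    likely foreground; low divergence means background.
    """
    H, W = feature_map.shape
    lo, hi = float(feature_map.min()), float(feature_map.max())
    sal = np.zeros((H, W))
    for y in range(s, H - s):
        for x in range(s, W - s):
            win = feature_map[y - s:y + s + 1, x - s:x + s + 1]
            ctr = feature_map[y - c:y + c + 1, x - c:x + c + 1]
            h_win, _ = np.histogram(win, bins=bins, range=(lo, hi))
            h_ctr, _ = np.histogram(ctr, bins=bins, range=(lo, hi))
            h_sur = h_win - h_ctr  # surround = window minus center
            p = h_ctr / (h_ctr.sum() + eps) + eps
            q = h_sur / (h_sur.sum() + eps) + eps
            # symmetric KL divergence as a proxy for discriminant power
            sal[y, x] = 0.5 * (np.sum(p * np.log(p / q)) +
                               np.sum(q * np.log(q / p)))
    return sal
```

In this toy form, a bright patch on a flat background scores high at its center and near zero elsewhere, and thresholding the saliency map yields a foreground mask. The paper's formulation replaces the scalar histograms with dynamic-texture models of spatiotemporal patches, which is what provides the robustness to background motion described above.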