Foreground segmentation in videos by background subtraction is widely used in video surveillance applications. Adaptive single-Gaussian or mixture-of-Gaussians models are commonly adopted to model the nonstationary temporal distributions of background pixels. A challenge for this approach, however, is that the so-called camouflage problem makes it hard to choose a threshold that separates foreground from background accurately. This paper proposes a simple and effective scheme to alleviate the problem: the frames of a video sequence are averaged temporally, which reduces the variances of the background models. The background model is thereby squeezed into a very narrow region, and the probability of camouflage is reduced dramatically, improving both sensitivity and reliability. Significant improvements are shown on real video data. Incorporating this algorithm into a statistical framework for background subtraction leads to improved foreground segmentation performance compared to a standard method.
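The core idea, temporal averaging to narrow the background model, can be illustrated with a minimal NumPy sketch. All names and parameters below are illustrative assumptions, not the paper's implementation: a single background pixel is modeled as Gaussian noise around a fixed intensity, and averaging each window of `k` consecutive frames shrinks the model's variance (by roughly a factor of `k` for independent noise), which widens the gap between the background distribution and camouflaged foreground intensities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic background pixel: 300 frames of noisy samples around intensity 100.
# (Illustrative values; the paper uses real surveillance video.)
frames = rng.normal(loc=100.0, scale=8.0, size=300)

# Per-frame background model: sample mean and variance over the raw frames.
mu_raw, var_raw = frames.mean(), frames.var()

# Temporal averaging: average each non-overlapping window of k frames.
k = 4
averaged = frames[: len(frames) // k * k].reshape(-1, k).mean(axis=1)
mu_avg, var_avg = averaged.mean(), averaged.var()

# For roughly independent frame noise, averaging k frames reduces the
# variance by about a factor of k, squeezing the background model into a
# narrower region and lowering the chance that a foreground pixel with a
# similar intensity (camouflage) falls inside it.
print(f"raw variance:      {var_raw:.2f}")
print(f"averaged variance: {var_avg:.2f}")
```

With a narrower background distribution, the same detection threshold (e.g. a fixed number of standard deviations from the mean) covers a smaller intensity band, so foreground pixels whose intensities are close to the background are less likely to be absorbed into the background model.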