In this letter, we address the problem of modeling scene background for moving object detection. Although per-pixel models have been extensively exploited in the literature, efficiently representing the relationships between pixel pairs remains a nontrivial problem. To address this issue, we propose a spatiotemporal smooth model based on a conditional random field. Beyond the mutual influence among labels, the model effectively encodes data dependencies by exploiting contextual constraints in terms of both spatial coherence and temporal persistency. The proposed model achieves accurate foreground extraction even in nonstationary scenes. Experiments conducted on various sequences demonstrate the effectiveness of the proposed method.
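One way to picture such a model (an illustrative sketch only; the letter's exact potentials, features, and inference procedure are not reproduced here) is an energy over a binary label field with a per-pixel data term plus Potts-style pairwise terms that penalize label disagreement between spatial neighbours within a frame and between corresponding pixels in consecutive frames:

```python
import numpy as np

def crf_energy(labels, unary, lambda_s=1.0, lambda_t=1.0):
    """Energy of a spatiotemporal binary label field (hypothetical sketch).

    labels: (T, H, W) integer array of 0/1 foreground masks per frame
    unary:  (T, H, W, 2) per-pixel data costs for labels 0 and 1
    lambda_s, lambda_t: weights of the spatial and temporal pairwise terms
    """
    # Data term: cost of the chosen label at each pixel.
    data = np.take_along_axis(unary, labels[..., None], axis=-1).sum()
    # Spatial coherence: Potts penalty for differing 4-neighbour labels.
    spatial = (labels[:, 1:, :] != labels[:, :-1, :]).sum() \
            + (labels[:, :, 1:] != labels[:, :, :-1]).sum()
    # Temporal persistency: penalty when a pixel's label flips across frames.
    temporal = (labels[1:] != labels[:-1]).sum()
    return data + lambda_s * spatial + lambda_t * temporal
```

In a full system, a labeling minimizing such an energy would be sought (e.g. by graph cuts); the function above only evaluates the energy of a candidate labeling.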