
Region-Level Motion-Based Background Modeling and Subtraction Using MRFs

Authors: Shih-Shinh Huang, Li-Chen Fu, Pei-Yung Hsiao — Dept. of Comput. Sci. & Inf. Eng., Nat. Taiwan Univ., Taipei

This paper presents a new approach to automatic segmentation of foreground objects from an image sequence by integrating techniques of background subtraction and motion-based foreground segmentation. First, a region-based motion segmentation algorithm is proposed to obtain a set of motion-coherent regions and the correspondence among regions at different time instants. Next, we formulate the classification problem as graph labeling over a region adjacency graph based on the Markov random field (MRF) statistical framework. A background model representing the background scene is built and then used to define a likelihood energy. In addition to the background model, temporal coherence is maintained by modeling it as the prior energy. Likewise, the color distributions of neighboring regions are taken into consideration to impose spatial coherence. The a priori energy of the MRF thus accounts for both spatial and temporal coherence, maintaining the continuity of our segmentation. Finally, a labeling is obtained by maximizing the a posteriori probability of the MRF. Under this formulation, the two kinds of techniques are integrated in an elegant way that makes foreground detection more accurate. Experimental results for several video sequences are provided to demonstrate the effectiveness of the proposed approach.
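To make the graph-labeling formulation concrete, the following is a minimal sketch of MAP labeling over a region adjacency graph via iterated conditional modes (ICM). All data structures, energy functions, and weights (`beta`, `gamma`) here are illustrative assumptions, not the paper's actual energy definitions, which model region likelihoods and coherence terms in more detail.

```python
def likelihood_energy(region, label):
    # Toy likelihood term: energy of assigning `label` (0 = background,
    # 1 = foreground) to a region, based on the distance between the
    # region's mean intensity and the background model's mean.
    diff = abs(region["mean_color"] - region["bg_mean"])
    return diff if label == 0 else (255 - diff)

def prior_energy(region, label, labels, adjacency, beta=40.0, gamma=60.0):
    # Spatial coherence: penalize disagreeing with adjacent regions.
    e = sum(beta for n in adjacency[region["id"]] if labels[n] != label)
    # Temporal coherence: penalize flipping the label kept from the
    # previous frame's corresponding region.
    if region["prev_label"] is not None and region["prev_label"] != label:
        e += gamma
    return e

def icm(regions, adjacency, sweeps=10):
    # Greedy coordinate descent on the total energy: each sweep sets every
    # region to its locally best label given its neighbors' current labels.
    labels = {r["id"]: 0 for r in regions}  # start with all-background
    for _ in range(sweeps):
        changed = False
        for r in regions:
            best = min(
                (0, 1),
                key=lambda l: likelihood_energy(r, l)
                + prior_energy(r, l, labels, adjacency),
            )
            if labels[r["id"]] != best:
                labels[r["id"]] = best
                changed = True
        if not changed:  # converged
            break
    return labels

# Two-region toy example: region 0 differs strongly from the background
# model and was foreground in the previous frame; region 1 matches it.
regions = [
    {"id": 0, "mean_color": 200, "bg_mean": 50, "prev_label": 1},
    {"id": 1, "mean_color": 52, "bg_mean": 50, "prev_label": 0},
]
adjacency = {0: [1], 1: [0]}
print(icm(regions, adjacency))  # region 0 -> foreground, region 1 -> background
```

ICM stands in here for whatever MAP estimator is actually used; it is simple and deterministic but only finds a local optimum of the posterior.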

Published in:

IEEE Transactions on Image Processing (Volume: 16, Issue: 5)