Video segmentation for content-based coding

Authors: T. Meier; K.N. Ngan (Dept. of Electr. & Electron. Eng., The University of Western Australia, Nedlands, WA, Australia)

To provide multimedia applications with new functionalities, the new video coding standard MPEG-4 relies on a content-based representation. This requires a prior decomposition of sequences into semantically meaningful, physical objects. We formulate this problem as one of separating foreground objects from the background based on motion information. For the object of interest, a 2D binary model is derived and tracked throughout the sequence. The model points consist of edge pixels detected by the Canny operator. To accommodate rotation and changes in shape of the tracked object, the model is updated every frame. These binary models then guide the actual video object plane (VOP) extraction. Thanks to our new boundary postprocessor and the excellent edge localization properties of the Canny operator, the resulting VOP contours are very accurate. Both the model initialization and update stages exploit motion information. The main assumption underlying our approach is the existence of a dominant global motion that can be assigned to the background. Areas that do not follow this background motion indicate the presence of independently moving physical objects. Two alternative methods to identify such objects are presented. The first employs a morphological motion filter with a new filter criterion, which measures the deviation of the locally estimated optical flow from the corresponding global motion. The second method computes a change detection mask by taking the difference between consecutive frames. The first version is more suitable for sequences with little motion, whereas the second version is better at dealing with faster moving or changing objects. Experimental results demonstrate the performance of our algorithm.
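As a minimal illustrative sketch (not the authors' implementation), the second method's change detection mask can be approximated by thresholding the absolute difference between consecutive frames: pixels whose intensity changes significantly are flagged as belonging to independently moving objects, the rest as background. The threshold value below is a hypothetical parameter chosen for the example.

```python
import numpy as np

def change_detection_mask(prev_frame, curr_frame, threshold=25):
    """Binary change mask from the absolute difference of consecutive frames.

    Pixels whose intensity changes by more than `threshold` are marked
    as changed (potential moving object); everything else is treated as
    background. `threshold` is an illustrative value, not from the paper.
    """
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# Synthetic example: a bright 5x5 "object" shifts one pixel to the right
# between two otherwise identical frames.
prev = np.zeros((32, 32), dtype=np.uint8)
curr = np.zeros((32, 32), dtype=np.uint8)
prev[10:15, 10:15] = 200
curr[10:15, 11:16] = 200

mask = change_detection_mask(prev, curr)
# Changed pixels appear where the object left (uncovered background)
# and where it arrived (newly covered background); the overlap region,
# where intensity is unchanged, is not flagged.
```

In practice such a raw mask still needs cleanup (e.g. morphological filtering and the boundary postprocessing described above) before it can guide VOP extraction, since uncovered-background pixels are flagged alongside the object itself.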

Published in:

IEEE Transactions on Circuits and Systems for Video Technology (Volume: 9, Issue: 8)