Learning Dynamic Hybrid Markov Random Field for Image Labeling

3 Author(s): Quan Zhou, Jun Zhu, and Wenyu Liu (Dept. of Electron. & Inf. Eng., Huazhong Univ. of Sci. & Technol., Wuhan, China)

The use of shape information has attracted increasing attention in the task of image labeling. In this paper, we present a dynamic hybrid Markov random field (DHMRF), which explicitly captures middle-level object shape and low-level visual appearance (e.g., texture and color) for image labeling. Each node in the DHMRF is described by either a deformable template or an appearance model as its visual prototype, while the edges encode two types of interactions, co-occurrence and spatial layered context, with respect to the labels and prototypes of the connected nodes. To learn the DHMRF model, an iterative algorithm is designed to automatically select the most informative features and estimate the model parameters; a branch-and-bound scheme is introduced into the parameter estimation step, which makes the algorithm computationally efficient. Compared with previous methods, which usually employ implicit shape cues, our DHMRF model seamlessly integrates color, texture, and shape cues to infer the labeling output, and thus produces more accurate and reliable results. Extensive experiments validate its superiority over other state-of-the-art methods in terms of recognition accuracy and implementation efficiency on the MSRC 21-class dataset and the Lotus Hill Institute 15-class dataset.
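
For orientation, a hybrid pairwise MRF of the kind described above is typically scored by an energy that combines node-level prototype potentials with edge-level interaction potentials. The expression below is only a generic sketch under that assumption, not the paper's exact formulation; the potentials \phi and \psi, the weight \lambda, and the graph (V, E) are illustrative notation introduced here:

E(\ell, p \mid I) \;=\; \sum_{i \in V} \phi(\ell_i, p_i; I) \;+\; \lambda \sum_{(i,j) \in E} \psi(\ell_i, \ell_j, p_i, p_j)

In such a sketch, the unary term \phi measures how well node i's prototype p_i (a deformable template or an appearance model) explains the image evidence under label \ell_i, the pairwise term \psi would encode the co-occurrence and spatial layered-context interactions between connected nodes, and labeling amounts to minimizing E over the labels \ell and prototypes p.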

Published in:

IEEE Transactions on Image Processing (Volume: 22, Issue: 6)