A generative sketch model for human hair analysis and synthesis

Authors: Hong Chen and Song-Chun Zhu
Department of Statistics and Computer Science, University of California, Los Angeles, CA, USA

In this paper, we present a generative sketch model for human hair analysis and synthesis. We treat hair images as 2D piecewise-smooth vector (flow) fields; our representation is thus view-based, in contrast to the physically based 3D hair models used in graphics. The generative model has three levels. The bottom level is the high-frequency band of the hair image. The middle level is a piecewise-smooth vector field for the hair orientation, gradient strength, and growth directions. The top level is an attribute sketch graph representing the discontinuities in the vector field. A sketch graph typically has a number of sketch curves, which are divided into 11 types of directed primitives. Each primitive is a small window (say, 5 × 7 pixels) where the orientations and growth directions are defined in parametric forms, for example, hair boundaries, occluding lines between hair strands, dividing lines on top of the hair, etc. In addition to this three-level representation, we model the shading effects, i.e., the low-frequency band of the hair image, by a linear superposition of Gaussian image bases, and we encode the hair color by a color map. The inference algorithm has two stages: 1) we compute the undirected orientation field and sketch graph from an input image, and 2) we compute the hair growth direction for the sketch curves and the orientation field using a Swendsen-Wang cut algorithm. Both steps maximize a joint Bayesian posterior probability. The generative model provides a straightforward way to synthesize realistic hair images and stylistic drawings (renderings) from a sketch graph and a few Gaussian bases. The latter can be either inferred from a real hair image or input (edited) manually through a simple sketching interface. We test our algorithm on a large data set of hair images with diverse hair styles. Analysis, synthesis, and rendering results are reported in the experiments.
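To give a concrete feel for the first inference stage, the undirected orientation field and gradient strength of the middle level can be estimated from a grayscale image with a structure tensor. The sketch below is a minimal, illustrative stand-in for the paper's Bayesian inference, not the authors' algorithm; the function name `orientation_field` and the use of NumPy/SciPy are assumptions for this example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def orientation_field(img, sigma=3.0):
    """Estimate a dense (undirected) orientation field and gradient strength
    from a grayscale image via the smoothed structure tensor.
    Illustrative stand-in, not the paper's inference algorithm."""
    # image gradients along rows (y) and columns (x)
    gy, gx = np.gradient(img.astype(float))
    # smoothed structure-tensor entries
    jxx = gaussian_filter(gx * gx, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    # dominant gradient orientation, defined modulo pi (undirected)
    theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
    # anisotropy of the tensor, used here as gradient strength
    strength = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2)
    return theta, strength
```

Note that a structure tensor only yields orientation modulo π; resolving the actual growth *direction* (modulo 2π) is exactly what the paper's second stage, the Swendsen-Wang cut over the sketch graph and orientation field, is for.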

Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence (Volume: 28, Issue: 7)