
An efficient approach to learning inhomogeneous Gibbs model


3 Author(s)

Abstract:

The inhomogeneous Gibbs model (IGM) (Liu et al., 2001) is an effective maximum entropy model for characterizing complex high-dimensional distributions. However, its slow training process has greatly restricted its applicability. In this paper, we propose an approach for fast parameter learning of IGM. In IGM learning, features are incrementally constructed to constrain the learnt distribution. Each time a new feature is added, Markov-chain Monte Carlo (MCMC) sampling must be repeated to draw samples for parameter learning. In contrast, our approach constructs a closed-form reference distribution using approximate information-gain criteria. Because this reference distribution is very close to the optimal one, importance sampling can be used to accelerate the parameter optimization. For problems with high-dimensional distributions, our approach typically achieves a speedup of two orders of magnitude over the original IGM. We further demonstrate the efficiency of our approach by learning a high-dimensional joint distribution of face images and their corresponding caricatures.
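The core acceleration described above rests on self-normalized importance sampling: expectations of features under the model distribution are estimated by reweighting samples drawn once from a nearby reference distribution, instead of re-running MCMC after every parameter change. The following is a minimal sketch of that generic estimator, not the paper's actual algorithm; the 1-D Gaussian target, the function names, and the sample sizes are illustrative assumptions.

```python
import math
import random

def importance_estimate(samples, log_w, f):
    """Self-normalized importance-sampling estimate of E_p[f(x)].

    `samples` are drawn from a reference distribution q, and `log_w`
    holds log p(x) - log q(x), known only up to an additive constant
    (normalizing constants cancel in the ratio below).
    """
    m = max(log_w)                              # stabilize before exponentiating
    w = [math.exp(lw - m) for lw in log_w]
    z = sum(w)
    return sum(wi * f(x) for wi, x in zip(w, samples)) / z

# Toy check (hypothetical setup): reference q = N(0,1), target p = N(1,1).
# Here log p(x) - log q(x) = x - 1/2, so the unnormalized weights are exp(x - 1/2).
random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(20000)]
log_w = [x - 0.5 for x in xs]
mean_est = importance_estimate(xs, log_w, lambda x: x)  # should approach E_p[x] = 1
```

The estimator is only efficient when the reference is close to the target, so the weights stay well-behaved; that is exactly why the paper's closed-form reference distribution, chosen by approximate information-gain criteria, makes importance sampling viable where a crude reference would not.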

Published in:

Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2003), Volume 1

Date of Conference:

18-20 June 2003