An efficient approach to learning inhomogeneous Gibbs model

  • Authors:
  • Ziqiang Liu; Hong Chen; Heung-Yeung Shum

  • Affiliations:
  • Microsoft Research Asia; Microsoft Research Asia; Microsoft Research Asia

  • Venue:
  • CVPR '03: Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
  • Year:
  • 2003

Abstract

Inhomogeneous Gibbs model (IGM) [4] is an effective maximum entropy model for characterizing complex high-dimensional distributions. However, its training process is so slow that the applicability of IGM has been greatly restricted. In this paper, we propose an approach for fast parameter learning of IGM. In IGM learning, features are incrementally constructed to constrain the learned distribution. Each time a new feature is added, Markov-chain Monte Carlo (MCMC) sampling must be repeated to draw samples for parameter learning. In contrast, our new approach constructs a closed-form reference distribution using approximate information gain criteria. Because this reference distribution is very close to the optimal one, importance sampling can be used to accelerate the parameter optimization. For problems with high-dimensional distributions, our approach typically achieves a speedup of two orders of magnitude over the original IGM. We further demonstrate its efficiency by learning a high-dimensional joint distribution of face images and their corresponding caricatures.
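
The abstract only describes the algorithm at a high level. The Python snippet below is a minimal illustrative sketch of the core idea it mentions: estimating the feature expectations of a maximum-entropy (exponential-family) model by self-normalized importance sampling from a fixed reference distribution, rather than re-running MCMC after each new feature is added. The function name, the plain gradient-ascent update, and the data layout are assumptions made for illustration, not the paper's actual implementation.

```python
import numpy as np

def fit_maxent_importance_sampling(data_feats, ref_feats, ref_log_q,
                                   n_iters=200, lr=0.1):
    """Illustrative sketch (not the paper's algorithm).

    Fits lambda in p(x) ∝ exp(sum_k lambda_k * f_k(x)) by matching model
    feature expectations to the observed ones, where the model expectations
    are estimated by self-normalized importance sampling from a fixed
    reference distribution q.

    data_feats : (N, K) feature values f_k(x) on the training data
    ref_feats  : (M, K) feature values on samples drawn once from q
    ref_log_q  : (M,)   log q(x) for those reference samples
    """
    K = data_feats.shape[1]
    lam = np.zeros(K)
    target = data_feats.mean(axis=0)          # observed feature expectations

    for _ in range(n_iters):
        # Unnormalized log importance weights: log p_lambda(x) - log q(x)
        log_w = ref_feats @ lam - ref_log_q
        w = np.exp(log_w - log_w.max())       # stabilize before exponentiating
        w /= w.sum()                          # self-normalized weights
        model_exp = w @ ref_feats             # estimate of E_p[f_k]
        lam += lr * (target - model_exp)      # maxent / max-likelihood gradient step
    return lam
```

Because the weights are self-normalized, neither the partition function of the model nor that of the reference distribution is needed; only an unnormalized log-density of each. The closer q is to the target distribution, the lower the variance of the estimated expectations, which is why a good closed-form reference distribution makes importance sampling a viable substitute for repeated MCMC runs.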