A maximum entropy approach to natural language processing. Computational Linguistics.
Inducing Features of Random Fields. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Learning in Gibbsian Fields: How Accurate and How Fast Can It Be? IEEE Transactions on Pattern Analysis and Machine Intelligence.
ECCV '98: Proceedings of the 5th European Conference on Computer Vision, Volume II.
Example-Based Caricature Generation with Exaggeration. PG '02: Proceedings of the 10th Pacific Conference on Computer Graphics and Applications.
Minimax Entropy Principle and Its Application to Texture Modeling. Neural Computation.
Computational studies of human motion: part 1, tracking and motion synthesis. Foundations and Trends® in Computer Graphics and Vision.
The inhomogeneous Gibbs model (IGM) [4] is an effective maximum entropy model for characterizing complex high-dimensional distributions. However, its training process is so slow that the applicability of IGM has been greatly restricted. In this paper, we propose an approach for fast parameter learning of IGMs. In IGM learning, features are incrementally constructed to constrain the learned distribution, and each time a new feature is added, Markov chain Monte Carlo (MCMC) sampling must be repeated to draw samples for parameter learning. In contrast, our approach constructs a closed-form reference distribution using an approximate information-gain criterion. Because this reference distribution is very close to the optimal one, importance sampling can be used to accelerate the parameter optimization process. For problems with high-dimensional distributions, our approach typically achieves a speedup of two orders of magnitude over the original IGM. We further demonstrate the efficiency of our approach by learning a high-dimensional joint distribution of face images and their corresponding caricatures.
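To make the importance-sampling idea concrete, the sketch below shows how samples drawn once from a fixed reference distribution can be reweighted to estimate expected feature values under the current model, so that MCMC need not be rerun after every parameter update. This is a minimal illustration of the general technique, not the paper's implementation: the 1-D Gaussian reference distribution, the toy feature, and all function names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature(x):
    # A toy feature statistic; the paper's features are learned incrementally.
    return x ** 2

def log_p_tilde(x, theta):
    # Unnormalized log-density of an exponential-family model
    # p(x; theta) proportional to exp(theta * f(x)).
    return theta * feature(x)

def log_q(x, mu=0.0, sigma=2.0):
    # Log-density of the (assumed) Gaussian reference distribution q.
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

# Draw reference samples ONCE; they are reused for every value of theta,
# which is where the speedup over repeated MCMC sampling comes from.
xs = rng.normal(0.0, 2.0, size=100_000)

def expected_feature(theta):
    # Self-normalized importance sampling:
    # E_p[f] ~= sum_i w_i f(x_i) / sum_i w_i, with w_i = p_tilde(x_i) / q(x_i).
    log_w = log_p_tilde(xs, theta) - log_q(xs)
    w = np.exp(log_w - log_w.max())  # subtract max for numerical stability
    return np.sum(w * feature(xs)) / np.sum(w)

# Maximum entropy parameter fitting: gradient ascent on the log-likelihood,
# whose gradient is E_data[f] - E_model[f], without redrawing samples.
target = 1.0   # E_data[f], assumed given by the training data
theta = -0.2
for _ in range(200):
    theta += 0.05 * (target - expected_feature(theta))

# For f(x) = x^2 the model is Gaussian with E_p[f] = -1/(2*theta),
# so theta should approach -0.5 for target = 1.0.
print(theta)
```

In the same spirit as the approach described above, the only per-update cost is reweighting the cached samples; the reference distribution is effective exactly when it is close to the model being fit, since highly mismatched distributions make the importance weights degenerate.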