EBEM: An Entropy-based EM Algorithm for Gaussian Mixture Models
ICPR '06 Proceedings of the 18th International Conference on Pattern Recognition - Volume 02
In this paper, we address the problem of estimating the parameters of Gaussian mixture models. Although the expectation-maximization (EM) algorithm yields the maximum-likelihood (ML) solution, its sensitivity to the choice of starting parameters is well known, and it may converge to the boundary of the parameter space. Furthermore, the resulting mixture depends on the number of selected components, but the optimal number of kernels may be unknown beforehand. We introduce the entropy of the probability density function (pdf) associated with each kernel to measure the quality of a given mixture model with a fixed number of kernels. We propose two methods to approximate the entropy of each kernel, together with a modification of the classical EM algorithm that finds the optimal number of components of the mixture. Moreover, we use two stopping criteria: a novel global criterion based on the entropy of the mixture, called the Gaussianity deficiency (GD), and one based on the minimum description length (MDL) principle. Our algorithm, called entropy-based EM (EBEM), starts with a single kernel and performs only splitting, selecting the worst kernel according to its GD. We have successfully tested it in probability density estimation, pattern classification, and color image segmentation. Experimental results improve on those of other state-of-the-art model-order selection methods.
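To make the loop concrete, the following Python sketch reproduces the overall structure the abstract describes: standard EM updates, a per-kernel comparison between the maximum (Gaussian) entropy and an empirical entropy estimate, and split-only refinement driven by a global Gaussianity-deficiency score. It is an illustration, not the authors' implementation: a Kozachenko-Leonenko k-NN estimator stands in for the paper's two entropy approximations, the exact GD formula and the principal-axis split are plausible simplifications, and all identifiers (em_step, knn_entropy, ebem_sketch, gd_threshold) are assumptions of this sketch.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gamma as gamma_fn
from scipy.stats import multivariate_normal


def gaussian_entropy(cov):
    # Differential entropy of a Gaussian: the maximum entropy attainable
    # by any pdf with the same covariance matrix.
    d = cov.shape[0]
    return 0.5 * (d * np.log(2 * np.pi * np.e) + np.linalg.slogdet(cov)[1])


def knn_entropy(X):
    # Kozachenko-Leonenko nearest-neighbour entropy estimator; a stand-in
    # for the paper's two entropy approximations.
    n, d = X.shape
    eps = cKDTree(X).query(X, k=2)[0][:, 1]        # distance to nearest neighbour
    v_d = np.pi ** (d / 2) / gamma_fn(d / 2 + 1)   # volume of the unit d-ball
    return (d * np.mean(np.log(eps + 1e-300)) + np.log(v_d)
            + np.log(n - 1) + np.euler_gamma)


def em_step(X, weights, means, covs):
    # One standard EM iteration for a Gaussian mixture.
    resp = np.column_stack([w * multivariate_normal.pdf(X, m, c)
                            for w, m, c in zip(weights, means, covs)])
    resp /= resp.sum(axis=1, keepdims=True)
    nk = resp.sum(axis=0)
    weights = nk / len(X)
    means = (resp.T @ X) / nk[:, None]
    d = X.shape[1]
    covs = []
    for k in range(len(nk)):
        diff = X - means[k]
        covs.append((resp[:, k, None] * diff).T @ diff / nk[k]
                    + 1e-6 * np.eye(d))            # regularize, stay off the boundary
    return weights, means, covs, resp


def ebem_sketch(X, gd_threshold=0.05, max_kernels=10, em_iters=50):
    # Split-only loop in the spirit of EBEM: start from one kernel and split
    # the least-Gaussian kernel until the global Gaussianity deficiency (GD)
    # drops below gd_threshold.
    n, d = X.shape
    weights = np.array([1.0])
    means = X.mean(axis=0, keepdims=True)
    covs = [np.cov(X.T) + 1e-6 * np.eye(d)]
    while True:
        for _ in range(em_iters):
            weights, means, covs, resp = em_step(X, weights, means, covs)
        labels = resp.argmax(axis=1)
        deficiency = []
        for k in range(len(weights)):
            pts = X[labels == k]
            if len(pts) <= d + 1:                  # too few points to estimate entropy
                deficiency.append(0.0)
                continue
            h_max = gaussian_entropy(covs[k])      # entropy if the kernel were Gaussian
            h_emp = knn_entropy(pts)               # estimated entropy of its data
            deficiency.append(max(0.0, (h_max - h_emp) / max(abs(h_max), 1e-12)))
        gd = float(np.dot(weights, deficiency))    # weight-averaged global deficiency
        if gd < gd_threshold or len(weights) >= max_kernels:
            return weights, means, covs
        k = int(np.argmax(deficiency))             # worst (least Gaussian) kernel
        lam, vecs = np.linalg.eigh(covs[k])        # split along its principal axis
        shift = np.sqrt(lam[-1]) * vecs[:, -1]
        means = np.vstack([means, means[k] + shift])
        means[k] = means[k] - shift
        covs.append(covs[k].copy())
        weights = np.append(weights, weights[k] / 2.0)
        weights[k] /= 2.0
```

Note that this sketch implements only the GD-threshold stopping rule; the paper's second criterion, based on the MDL principle, would instead score each candidate model order by its description length and keep the minimizer.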