Many applications require learning the parameters of a model from data. EM (Expectation-Maximization) is a method for learning the parameters of probabilistic models with missing or hidden data, but in some instances it is slow to converge. Several accelerations have therefore been proposed to speed it up. None of the proposed acceleration methods is theoretically dominant, and experimental comparisons are lacking. In this paper, we present the proposed accelerations and compare them experimentally. Based on the results of the experiments, we argue that some acceleration of EM is always possible, but that which acceleration is superior depends on the properties of the problem.
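
For readers unfamiliar with the baseline being accelerated, the following is a minimal sketch (not from the paper) of plain, unaccelerated EM for a two-component 1-D Gaussian mixture, assuming NumPy; the function and parameter names are illustrative only. Each iteration alternates an E-step (compute responsibilities under the current parameters) with an M-step (re-estimate parameters from the weighted sufficient statistics), and stops when the log-likelihood improvement falls below a tolerance; this per-iteration structure is what the accelerations discussed in the paper aim to speed up.

    import numpy as np

    def em_gmm_1d(x, n_components=2, n_iter=100, tol=1e-6, seed=0):
        """Fit a 1-D Gaussian mixture with plain (unaccelerated) EM."""
        rng = np.random.default_rng(seed)
        n = x.shape[0]
        # Initialise mixing weights, means, and variances.
        w = np.full(n_components, 1.0 / n_components)
        mu = rng.choice(x, size=n_components, replace=False)
        var = np.full(n_components, x.var())
        prev_ll = -np.inf
        for _ in range(n_iter):
            # E-step: responsibilities r[i, k] = P(component k | x_i).
            dens = (w / np.sqrt(2 * np.pi * var)
                    * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var))
            r = dens / dens.sum(axis=1, keepdims=True)
            # M-step: re-estimate parameters from weighted statistics.
            nk = r.sum(axis=0)
            w = nk / n
            mu = (r * x[:, None]).sum(axis=0) / nk
            var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
            # Convergence check on the log-likelihood (monotone under EM).
            ll = np.log(dens.sum(axis=1)).sum()
            if ll - prev_ll < tol:
                break
            prev_ll = ll
        return w, mu, var

    # Illustrative usage: data from two Gaussians; EM should recover both modes.
    rng = np.random.default_rng(1)
    data = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 0.5, 500)])
    print(em_gmm_1d(data))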