Asymptotic Convergence Rate of the EM Algorithm for Gaussian Mixtures
Neural Computation
It is well known that the EM algorithm generally converges to a local maximum of the likelihood. However, there is considerable evidence that the EM algorithm converges correctly to the true parameters as long as the overlap of the Gaussians in the sample data is small enough. This paper studies this correct convergence problem asymptotically for the EM algorithm on Gaussian mixtures. We prove that the EM algorithm becomes a contraction mapping on the parameters within a neighborhood of the consistent maximum likelihood solution when the measure of average overlap among the Gaussians in the original mixture is small enough and the number of samples is large enough. That is, if the initial parameters are set within this neighborhood, the EM algorithm always converges to the consistent solution, i.e., the expected result. Moreover, simulation results further demonstrate that this correct convergence neighborhood grows as the average overlap shrinks.
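As an informal illustration of the behavior the abstract describes (not the paper's own analysis), the following sketch runs standard EM on a two-component one-dimensional Gaussian mixture whose components are well separated, i.e., have small overlap. All parameter values and the initialization are illustrative assumptions; with a rough initialization near the true parameters, the iterates converge to estimates close to the true means.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample data from a well-separated two-component Gaussian mixture:
# the means are far apart relative to the standard deviations, so the
# average overlap between the components is small.
true_means = np.array([-5.0, 5.0])
true_stds = np.array([1.0, 1.0])
true_weights = np.array([0.5, 0.5])
n = 2000
comp = rng.choice(2, size=n, p=true_weights)
x = rng.normal(true_means[comp], true_stds[comp])

# Illustrative initialization within a neighborhood of the solution.
means = np.array([-1.0, 1.0])
stds = np.array([2.0, 2.0])
weights = np.array([0.5, 0.5])

for _ in range(100):
    # E-step: posterior responsibility of each component for each sample.
    dens = weights * np.exp(-0.5 * ((x[:, None] - means) / stds) ** 2) \
        / (stds * np.sqrt(2.0 * np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)

    # M-step: re-estimate mixing weights, means, and standard deviations.
    nk = resp.sum(axis=0)
    weights = nk / n
    means = (resp * x[:, None]).sum(axis=0) / nk
    stds = np.sqrt((resp * (x[:, None] - means) ** 2).sum(axis=0) / nk)

print(np.sort(means))  # estimates close to the true means -5 and 5
```

With this degree of separation the fitted means land within sampling error of the true values; as the components are moved closer together (larger overlap), the same initialization increasingly fails to recover them, which is the qualitative effect the paper quantifies.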