Hierarchical mixtures of experts and the EM algorithm
Neural Computation
Asymptotic Convergence Rate of the EM Algorithm for Gaussian Mixtures
Neural Computation
On convergence properties of the EM algorithm for Gaussian mixtures
Neural Computation
A Semi-supervised Learning Algorithm on Gaussian Mixture with Automatic Model Selection
Neural Processing Letters
A BYY Split-and-Merge EM Algorithm for Gaussian Mixture Learning
ISNN '08 Proceedings of the 5th International Symposium on Neural Networks: Advances in Neural Networks
Singularity and Slow Convergence of the EM algorithm for Gaussian Mixtures
Neural Processing Letters
A Single Loop EM Algorithm for the Mixture of Experts Architecture
ISNN 2009 Proceedings of the 6th International Symposium on Neural Networks: Advances in Neural Networks - Part II
On the correct convergence of the EM algorithm for Gaussian mixtures
Pattern Recognition
Energy based competitive learning
Neurocomputing
Asymptotic convergence properties of the EM algorithm for mixture of experts
Neural Computation
The EM algorithm is generally considered a linearly convergent algorithm. However, many empirical results show that it can converge significantly faster than gradient-based first-order iterative algorithms, especially when the overlap of the densities in a mixture is small. This paper explores this issue theoretically for mixtures of densities from a class of exponential families. We prove that as an average overlap measure of the densities in the mixture tends to zero, the asymptotic convergence rate of the EM algorithm locally around the true solution is a higher-order infinitesimal than a positive-order power of this overlap measure. Thus, the large-sample local convergence rate of the EM algorithm becomes asymptotically superlinear as the overlap of the densities in the mixture tends to zero. Moreover, this result is worked out in detail for Gaussian mixtures.
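The overlap-dependent speedup described in the abstract can be observed empirically. The sketch below is a minimal illustration, not the paper's analysis: `em_gmm_1d` is a hypothetical helper running EM on a two-component 1-D Gaussian mixture with known unit variances and equal mixing weights, estimating only the two means. When the components are well separated (small overlap), EM needs far fewer iterations to converge than when they overlap heavily.

```python
import math
import random

def em_gmm_1d(data, mu_init, n_iter=1000, tol=1e-6):
    """EM for a two-component 1-D Gaussian mixture with unit variances
    and equal weights; only the component means are estimated.
    Returns the fitted means and the number of iterations used."""
    mu1, mu2 = mu_init
    for it in range(n_iter):
        # E-step: posterior responsibility of component 1 for each point.
        resp = []
        for x in data:
            p1 = math.exp(-0.5 * (x - mu1) ** 2)
            p2 = math.exp(-0.5 * (x - mu2) ** 2)
            resp.append(p1 / (p1 + p2))
        # M-step: responsibility-weighted sample means.
        w1 = sum(resp)
        w2 = len(data) - w1
        new_mu1 = sum(r * x for r, x in zip(resp, data)) / w1
        new_mu2 = sum((1 - r) * x for r, x in zip(resp, data)) / w2
        if abs(new_mu1 - mu1) + abs(new_mu2 - mu2) < tol:
            return (new_mu1, new_mu2), it + 1
        mu1, mu2 = new_mu1, new_mu2
    return (mu1, mu2), n_iter

# Demo: same algorithm, small vs. large overlap between the components.
random.seed(1)
data_near = ([random.gauss(0.0, 1.0) for _ in range(250)]
             + [random.gauss(1.0, 1.0) for _ in range(250)])  # heavy overlap
data_far = ([random.gauss(0.0, 1.0) for _ in range(250)]
            + [random.gauss(8.0, 1.0) for _ in range(250)])   # tiny overlap
mus_near, it_near = em_gmm_1d(data_near, (-1.0, 2.0))
mus_far, it_far = em_gmm_1d(data_far, (-1.0, 9.0))
print("iterations (heavy overlap):", it_near)
print("iterations (tiny overlap):", it_far)
```

With the well-separated data the responsibilities are nearly 0/1, so the M-step jumps almost directly to the true means and convergence looks superlinear; with heavily overlapped data the iteration exhibits the familiar slow linear behavior.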