Hierarchical mixtures of experts and the EM algorithm
Neural Computation
Parameter estimation for a mixture of linear regressions (EM algorithm, asymptotic efficiency)
On convergence properties of the EM algorithm for Gaussian mixtures
Neural Computation
On Convergence of an Iterative Factor Estimate Algorithm for the NFA Model
ICANN '02 Proceedings of the International Conference on Artificial Neural Networks
A Comparative Study on Three MAP Factor Estimate Approaches for NFA
IDEAL '02 Proceedings of the Third International Conference on Intelligent Data Engineering and Automated Learning
A theoretical framework for multiple neural network systems
Neurocomputing
Estimating local optimums in EM algorithm over Gaussian mixture model
Proceedings of the 25th international conference on Machine learning
A BYY Split-and-Merge EM Algorithm for Gaussian Mixture Learning
ISNN '08 Proceedings of the 5th international symposium on Neural Networks: Advances in Neural Networks
Singularity and Slow Convergence of the EM algorithm for Gaussian Mixtures
Neural Processing Letters
Patch clustering for massive data sets
Neurocomputing
A Single Loop EM Algorithm for the Mixture of Experts Architecture
ISNN 2009 Proceedings of the 6th International Symposium on Neural Networks: Advances in Neural Networks - Part II
Scalable model-based cluster analysis using clustering features
Pattern Recognition
On the correct convergence of the EM algorithm for Gaussian mixtures
Pattern Recognition
Asymptotic convergence properties of the EM algorithm for mixture of experts
Neural Computation
A robust EM clustering algorithm for Gaussian mixture models
Pattern Recognition
Adaptive quantization using piecewise companding and scaling for Gaussian mixture
Journal of Visual Communication and Image Representation
A multi-threshold segmentation approach based on Artificial Bee Colony optimization
Applied Intelligence
It is well known that the convergence rate of the expectation-maximization (EM) algorithm can be faster than that of conventional first-order iterative algorithms when the overlap in the given mixture is small, but this claim has not previously been proved mathematically. This article studies the problem asymptotically in the setting of Gaussian mixtures, under the theoretical framework of Xu and Jordan (1996). It is proved that the asymptotic convergence rate of the EM algorithm for Gaussian mixtures, locally around the true solution Θ*, is o(e^{0.5−ε}(Θ*)), where ε > 0 is an arbitrarily small number, o(x) denotes a higher-order infinitesimal as x → 0, and e(Θ*) is a measure of the average overlap of the Gaussians in the mixture. In other words, the large-sample local convergence rate of the EM algorithm tends to be asymptotically superlinear as e(Θ*) tends to zero.
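The qualitative claim in the abstract, that EM converges faster when the Gaussian components overlap less, can be observed empirically. The following is a minimal sketch (not the paper's method) of EM for a two-component one-dimensional Gaussian mixture; the function name `em_gmm_1d` and the particular separations are illustrative choices, not taken from the paper.

```python
import numpy as np

def em_gmm_1d(x, mu, sigma, pi, n_iter=300, tol=1e-10):
    """Basic EM for a two-component 1-D Gaussian mixture.

    Returns the estimated means and the number of iterations
    until the means stop moving (within tol)."""
    for it in range(n_iter):
        mu_old = mu.copy()
        # E-step: posterior responsibility of each component for each point
        dens = (pi / (sigma * np.sqrt(2 * np.pi))
                * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update mixing weights, means, and standard deviations
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        if np.max(np.abs(mu - mu_old)) < tol:
            break
    return mu, it + 1

rng = np.random.default_rng(0)
n = 2000
init = (np.array([-1.0, 1.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5]))

# Small overlap (means at -5 and +5, unit variance): EM converges rapidly.
x_far = np.concatenate([rng.normal(-5, 1, n), rng.normal(5, 1, n)])
mu_far, iters_far = em_gmm_1d(x_far, *init)

# Heavy overlap (means at -0.5 and +0.5): convergence slows markedly.
x_near = np.concatenate([rng.normal(-0.5, 1, n), rng.normal(0.5, 1, n)])
mu_near, iters_near = em_gmm_1d(x_near, *init)

print("iterations, small overlap:", iters_far)
print("iterations, large overlap:", iters_near)
```

Running this, the well-separated mixture reaches the stopping tolerance in far fewer iterations than the heavily overlapping one, consistent with the superlinear local rate the abstract establishes as overlap shrinks.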