Asymptotic Convergence Rate of the EM Algorithm for Gaussian Mixtures

  • Authors:
  • Jinwen Ma; Lei Xu; Michael I. Jordan

  • Affiliations:
  • Department of Computer Science & Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong, and Institute of Mathematics, Shantou University, Shantou, Guangdong, 515063, People's Republic ...; Department of Computer Science & Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong, People's Republic of China; Department of Computer Science and Department of Statistics, University of California at Berkeley, Berkeley, CA 94720, U.S.A.

  • Venue:
  • Neural Computation

  • Year:
  • 2000

Abstract

It is well known that the convergence rate of the expectation-maximization (EM) algorithm can be faster than that of conventional first-order iterative algorithms when the overlap in the given mixture is small, but this claim had not previously been proved mathematically. This article studies the problem asymptotically in the setting of gaussian mixtures under the theoretical framework of Xu and Jordan (1996). It is proved that the asymptotic convergence rate of the EM algorithm for gaussian mixtures locally around the true solution Θ* is o(e^{0.5−ε}(Θ*)), where ε > 0 is an arbitrarily small number, o(x) denotes a quantity that is a higher-order infinitesimal as x → 0, and e(Θ*) is a measure of the average overlap of the gaussians in the mixture. In other words, the large-sample local convergence rate of the EM algorithm tends to be asymptotically superlinear as e(Θ*) tends to zero.
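
The relationship between overlap and convergence rate can be checked numerically. The sketch below (not from the paper) runs EM on a two-component, one-dimensional gaussian mixture: it first locates the EM fixed point, then restarts from a small perturbation and records the late-iteration contraction ratio ||Θ^{t+1} − Θ̂|| / ||Θ^t − Θ̂||. As the separation between the components grows (i.e., as the average overlap e(Θ*) shrinks), the observed ratio should shrink toward zero. The function names em_step and local_rate, the separation values, and all tolerances are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def em_step(x, pi, mu, sigma2):
    """One EM iteration for a 1-D gaussian mixture (E-step, then M-step)."""
    # E-step: component densities and posterior responsibilities P(k | x_i).
    dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
    r = pi * dens
    r /= r.sum(axis=1, keepdims=True)
    # M-step: weighted updates of mixing proportions, means, and variances.
    nk = r.sum(axis=0)
    pi_new = nk / len(x)
    mu_new = (r * x[:, None]).sum(axis=0) / nk
    s2_new = (r * (x[:, None] - mu_new) ** 2).sum(axis=0) / nk
    return pi_new, mu_new, s2_new

def local_rate(sep, n=100_000):
    """Estimate the local linear convergence rate of EM near its fixed point."""
    mu0 = np.array([-sep / 2, sep / 2])  # larger sep => smaller overlap e(Theta*)
    x = np.concatenate([rng.normal(m, 1.0, n // 2) for m in mu0])
    # Run EM long enough to locate the fixed point of the EM map.
    pi, mu, s2 = np.array([0.5, 0.5]), mu0.copy(), np.array([1.0, 1.0])
    for _ in range(500):
        pi, mu, s2 = em_step(x, pi, mu, s2)
    mu_hat = mu.copy()
    # Restart from a small perturbation of the means and track the
    # contraction of the mean estimates toward mu_hat per iteration.
    pi, mu, s2 = np.array([0.5, 0.5]), mu_hat + 0.2, np.array([1.0, 1.0])
    prev = np.linalg.norm(mu - mu_hat)
    rate = 1.0
    for _ in range(50):
        pi, mu, s2 = em_step(x, pi, mu, s2)
        cur = np.linalg.norm(mu - mu_hat)
        if prev > 1e-10 and cur > 1e-12:  # keep the last numerically valid ratio
            rate = cur / prev
        prev = cur
    return rate

for sep in [1.0, 2.0, 4.0, 6.0]:
    print(f"separation {sep:.1f}: observed local rate ~ {local_rate(sep):.4f}")
```

Under these assumptions, well-separated components (separation 4.0 or 6.0) should print contraction ratios close to zero, while heavily overlapping components (separation 1.0) should print a ratio near one, consistent with the abstract's claim that the local rate becomes asymptotically superlinear as e(Θ*) tends to zero.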