Comparative Analysis on Convergence Rates of The EM Algorithm and Its Two Modifications for Gaussian Mixtures

  • Authors:
  • Lei Xu

  • Affiliations:
  • Dept of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, NT, Hong Kong, P.R. China Email: lxu@cs.cuhk.edu.hk

  • Venue:
  • Neural Processing Letters
  • Year:
  • 1997

Abstract

For Gaussian mixtures, a comparative analysis is made of the convergence rates of the Expectation-Maximization (EM) algorithm and two of its modifications. One is a variant of the EM algorithm (denoted VEM) that uses the old value of the mean vectors, instead of the latest updated one, when updating the covariance matrices. The other, called the Momentum EM algorithm (MEM), is obtained by adding a momentum term to the EM updating equation. Upper bounds on their convergence rates are obtained, extending and modifying those given in Xu & Jordan (1996). It is shown that the EM algorithm and VEM are equivalent in their local convergence behaviour and rates, and that MEM can speed up the convergence of the EM algorithm if a suitable amount of momentum is added. Moreover, a theoretical guide on how to choose the momentum is proposed, and a possible approach for further speeding up convergence is suggested.
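To make the MEM idea concrete, the sketch below implements EM for a Gaussian mixture with an optional momentum term added to each parameter update, so that the new parameters are the standard EM update plus `eta` times the most recent parameter change. This is only an illustrative reading of the abstract: the function name, the momentum coefficient `eta`, the random initialisation, and the covariance regularisation are assumptions for the sketch, not the paper's exact formulation (in particular, the paper derives a theoretical guide for choosing the momentum, which is not reproduced here). Setting `eta = 0` recovers plain EM.

```python
import numpy as np

def em_gmm_momentum(X, K, n_iter=100, eta=0.0, seed=0):
    """EM for a K-component Gaussian mixture on data X (n x d),
    with a momentum term eta on the parameter updates (MEM-style sketch).
    eta = 0.0 gives the standard EM algorithm."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Initialise: K random data points as means, identity covariances,
    # uniform mixing weights (an arbitrary choice for this sketch).
    mu = X[rng.choice(n, K, replace=False)].copy()
    cov = np.stack([np.eye(d)] * K)
    pi = np.full(K, 1.0 / K)
    mu_prev, cov_prev, pi_prev = mu.copy(), cov.copy(), pi.copy()

    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] ∝ pi_k * N(x_i | mu_k, cov_k)
        r = np.empty((n, K))
        for k in range(K):
            diff = X - mu[k]
            inv = np.linalg.inv(cov[k])
            det = np.linalg.det(cov[k])
            expo = -0.5 * np.einsum("ij,jk,ik->i", diff, inv, diff)
            r[:, k] = pi[k] * np.exp(expo) / np.sqrt((2 * np.pi) ** d * det)
        r /= r.sum(axis=1, keepdims=True)

        # M-step: standard EM updates.
        Nk = r.sum(axis=0)
        mu_new = (r.T @ X) / Nk[:, None]
        cov_new = np.empty_like(cov)
        for k in range(K):
            diff = X - mu_new[k]
            cov_new[k] = (r[:, k, None] * diff).T @ diff / Nk[k]
            cov_new[k] += 1e-6 * np.eye(d)  # keep covariances invertible
        pi_new = Nk / n

        # Momentum step: theta <- EM(theta) + eta * (theta - theta_prev).
        mu, mu_prev = mu_new + eta * (mu - mu_prev), mu.copy()
        cov, cov_prev = cov_new + eta * (cov - cov_prev), cov.copy()
        pi, pi_prev = pi_new + eta * (pi - pi_prev), pi.copy()
        pi = np.clip(pi, 1e-12, None)
        pi /= pi.sum()  # renormalise after the momentum step

    return pi, mu, cov
```

In this sketch the momentum is applied uniformly to all parameters; the paper's analysis concerns how large `eta` can be while still accelerating, rather than slowing or destabilising, local convergence.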