On Convergence Properties of the EM Algorithm for Gaussian Mixtures

  • Authors:
  • Lei Xu; Michael I. Jordan

  • Affiliations:
  • Lei Xu: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 USA, and Department of Computer Science, The Chinese University of Hong Kong, Hong Kong
  • Michael I. Jordan: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 USA

  • Venue:
  • Neural Computation
  • Year:
  • 1996

Abstract

We build up the mathematical connection between the “Expectation-Maximization” (EM) algorithm and gradient-based approaches for maximum likelihood learning of finite Gaussian mixtures. We show that the EM step in parameter space is obtained from the gradient via a projection matrix P, and we provide an explicit expression for this matrix. We then analyze the convergence of EM in terms of special properties of P and present new results on the effect that P has on the likelihood surface. Based on these mathematical results, we give a comparative discussion of the advantages and disadvantages of EM and other algorithms for learning Gaussian mixture models.
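To make the stated relationship concrete, the following is a minimal numerical sketch in Python (not taken from the paper). It assumes the standard EM updates for a Gaussian mixture, and for the mean block of P it uses the form P = Σ_j / Σ_t h_jt (where h_jt is the posterior responsibility of component j for data point t), which matches the explicit expression the paper derives for the mean parameters. The sketch checks that one EM step for a component mean coincides with a gradient step on the log likelihood premultiplied by this matrix.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Toy data: two well-separated clusters in the plane.
X = np.vstack([rng.normal(-2.0, 1.0, size=(100, 2)),
               rng.normal(3.0, 1.0, size=(100, 2))])
N, d = X.shape

# Current (deliberately misplaced) parameters of a 2-component mixture.
pis = np.array([0.5, 0.5])
mus = np.array([[-1.0, 0.0], [2.0, 1.0]])
covs = np.array([np.eye(d), np.eye(d)])

# E-step: posterior responsibilities h[t, j] = P(component j | x_t).
dens = np.column_stack([pis[j] * multivariate_normal.pdf(X, mus[j], covs[j])
                        for j in range(2)])
h = dens / dens.sum(axis=1, keepdims=True)

j = 0  # inspect the first component

# Standard EM (M-step) update for the mean of component j.
mu_em = h[:, j] @ X / h[:, j].sum()

# Gradient of the log likelihood with respect to mu_j:
#   grad = sum_t h[t, j] * inv(cov_j) @ (x_t - mu_j)
grad = np.linalg.inv(covs[j]) @ (h[:, j] @ (X - mus[j]))

# Assumed projection matrix for the mean block: P = cov_j / sum_t h[t, j].
P = covs[j] / h[:, j].sum()

# The EM step equals a gradient step premultiplied by P.
print(np.allclose(mu_em, mus[j] + P @ grad))  # True
```

Since Σ_j is positive definite and Σ_t h_jt > 0, this block of P is positive definite, so the EM step always has a positive inner product with the gradient; the paper derives analogous blocks for the mixing proportions and covariances and builds its convergence analysis on these properties.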