Choosing starting values for the EM algorithm for getting the highest likelihood in multivariate Gaussian mixture models

  • Authors: Christophe Biernacki; Gilles Celeux; Gérard Govaert
  • Affiliations: UMR CNRS 6623, Université de Franche-Comté, France; INRIA Rhône-Alpes, France; UMR CNRS 6599, UTC Compiègne, France
  • Venue: Computational Statistics & Data Analysis
  • Year: 2003

Abstract

Simple methods for choosing sensible starting values for the EM algorithm, aimed at maximum likelihood parameter estimation in mixture models, are compared. They are based on random initialization, on a Classification EM algorithm (CEM), on a Stochastic EM algorithm (SEM), or on preliminary short runs of EM itself. These initializations are embedded in a search/run/select strategy, which can be compounded by repeating the three steps. The strategies are compared in the context of multivariate Gaussian mixtures through numerical experiments on both simulated and real data sets, within a fixed total number of iterations. The main conclusions of these numerical experiments are the following. Simple random initialization, probably the most common way of starting EM, is often outperformed by strategies using CEM, SEM, or short runs of EM before running EM itself. Compounding also appears to be generally profitable, since a single run of EM can often lead to suboptimal solutions. Otherwise, none of the strategies examined can be regarded as the best one, and it is difficult to characterize situations in which a particular strategy can be expected to outperform the others. The strategy that initiates EM with short runs of EM can nevertheless be recommended. This strategy, which to our knowledge had not been used before the present study, has several advantages: it is simple, it performs well in many situations, it presupposes no particular form for the mixture to be fitted to the data, and it seems little sensitive to noisy data.
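The search/run/select strategy with short runs of EM can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes a one-dimensional Gaussian mixture for brevity (the paper treats multivariate mixtures), and all function names and parameter choices (number of starts, iteration budgets) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixture_densities(x, w, mu, s2):
    # per-component weighted Gaussian densities, shape (n, k)
    return w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / s2) / np.sqrt(2 * np.pi * s2)

def log_lik(x, w, mu, s2):
    # observed-data log-likelihood of the mixture
    return np.sum(np.log(mixture_densities(x, w, mu, s2).sum(axis=1)))

def em(x, w, mu, s2, n_iter):
    # run n_iter EM iterations and return the updated parameters
    for _ in range(n_iter):
        # E-step: posterior responsibilities of each component for each point
        dens = mixture_densities(x, w, mu, s2)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        s2 = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, s2

def short_runs_of_em(x, k, n_starts=10, short_iter=5, long_iter=200):
    # search: several short EM runs from random starting positions
    best, best_ll = None, -np.inf
    for _ in range(n_starts):
        mu0 = rng.choice(x, size=k, replace=False)       # random means
        params = em(x, np.full(k, 1.0 / k), mu0, np.full(k, x.var()), short_iter)
        ll = log_lik(x, *params)
        # select: keep the short run with the highest log-likelihood
        if ll > best_ll:
            best, best_ll = params, ll
    # run: continue EM to convergence from the selected solution
    return em(x, *best, long_iter)

# two well-separated components as a toy data set
x = np.concatenate([rng.normal(-3, 1, 300), rng.normal(3, 1, 300)])
w, mu, s2 = short_runs_of_em(x, k=2)
```

Compounding, in the paper's sense, would amount to repeating the search/run/select loop above several times and keeping the overall best solution.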