A gradient entropy regularized likelihood learning algorithm on Gaussian mixture with automatic model selection

  • Authors:
  • Zhiwu Lu; Jinwen Ma

  • Affiliations:
  • Institute of Computer Science & Technology, Peking University, Beijing, China; Department of Information Science, School of Mathematical Sciences and LMAM, Peking University, Beijing, China

  • Venue:
  • ISNN'06: Proceedings of the Third International Conference on Advances in Neural Networks - Volume Part I
  • Year:
  • 2006

Abstract

In Gaussian mixture (GM) modeling, it is crucial to select the number of Gaussians for a given sample data set. In this paper, we propose a gradient entropy regularized likelihood (ERL) learning algorithm on Gaussian mixtures to solve this problem under regularization theory. Simulation experiments demonstrate that the gradient ERL learning algorithm can automatically select an appropriate number of Gaussians during parameter learning on a sample data set and leads to a good estimate of the parameters of the actual Gaussian mixture, even when two or more of the actual Gaussians overlap strongly.
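
The abstract does not reproduce the ERL objective itself, but the general idea of entropy-regularized likelihood learning for mixtures can be sketched as follows. The sketch below is an illustration under assumptions, not the authors' algorithm: it assumes a 1-D Gaussian mixture, a posterior-entropy regularizer with coefficient `gamma`, an approximate reweighting of the EM-style responsibilities in the gradient, and a softmax reparameterization of the mixing weights. The point is only to show the mechanism behind automatic model selection: an entropy-style penalty can drive the mixing weights of redundant components toward zero, so a model initialized with too many Gaussians effectively prunes itself during learning.

```python
# A minimal, assumption-laden sketch (not the authors' exact ERL algorithm):
# gradient ascent on log L(Theta) + gamma * sum_t sum_j P(j|x_t) log P(j|x_t)
# for a 1-D Gaussian mixture. The regularizer form, `gamma`, the learning rate,
# and the approximate responsibility reweighting are illustrative assumptions.
import numpy as np

def erl_gradient_step(x, pi, mu, var, gamma=0.5, lr=1e-3, eps=1e-12):
    """One approximate gradient step on the entropy-regularized likelihood."""
    diff = x[:, None] - mu[None, :]                          # (n, k)
    dens = np.exp(-0.5 * diff**2 / var) / np.sqrt(2.0 * np.pi * var)
    joint = pi * dens + eps
    post = joint / joint.sum(axis=1, keepdims=True)          # posteriors P(j|x_t)

    # Entropy-reweighted responsibilities: components that explain a point only
    # weakly (small posterior) receive negative weight and are pushed to shrink.
    w = post * (1.0 + gamma * np.log(post + eps))

    # Gradient ascent on means and variances using EM-style sufficient statistics.
    mu = mu + lr * (w * diff / var).sum(axis=0)
    var = var + lr * (w * (diff**2 - var) / (2.0 * var**2)).sum(axis=0)
    var = np.maximum(var, 1e-6)

    # Update mixing weights through a softmax reparameterization so they stay
    # positive and sum to one; weights of redundant components can decay.
    g = w.sum(axis=0) / (pi + eps)
    logits = np.log(pi + eps) + lr * pi * (g - (pi * g).sum())
    pi = np.exp(logits - logits.max())
    pi = pi / pi.sum()
    return pi, mu, var

# Usage: start with more components (k = 5) than the data need (2 clusters)
# and inspect which mixing weights survive after training.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(2.0, 0.5, 300)])
pi, mu, var = np.full(5, 0.2), rng.normal(0.0, 3.0, 5), np.full(5, 1.0)
for _ in range(2000):
    pi, mu, var = erl_gradient_step(x, pi, mu, var)
print(np.round(pi, 3), np.round(mu, 2))
```

In ERL-style methods, the model order is read off after training by discarding components whose mixing weights have fallen below a small threshold; the exact objective, gradient derivation, and selection rule used in the paper should be taken from the paper itself rather than from this sketch.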