Averaging, maximum penalized likelihood and Bayesian estimation for improving Gaussian mixture probability density estimates

  • Authors:
  • D. Ormoneit; V. Tresp

  • Affiliations:
  • Dept. of Comput. Sci., Tech. Univ. München

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 1998

Abstract

We apply the idea of averaging ensembles of estimators to probability density estimation. In particular, we use Gaussian mixture models, which are important components in many neural-network applications. We investigate the performance of averaging using three data sets. For comparison, we employ two traditional regularization approaches: a maximum penalized likelihood approach and a Bayesian approach. In the maximum penalized likelihood approach we use penalty functions derived from conjugate Bayesian priors, so that an expectation-maximization (EM) algorithm can be used for training. In all experiments, the maximum penalized likelihood approach and averaging improved performance considerably compared with a maximum likelihood approach. In two of the experiments, the maximum penalized likelihood approach outperformed averaging; in one experiment, averaging was clearly superior. Our conclusion is that maximum penalized likelihood gives good results if the penalty term in the cost function is appropriate for the particular problem. If this is not the case, averaging is superior, since it shows greater robustness by not relying on any particular prior assumption. The Bayesian approach worked very well on a low-dimensional toy problem but failed to give good performance in higher-dimensional problems.
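
For illustration, the following is a minimal sketch of the averaging idea described in the abstract, not the authors' original implementation: fit several Gaussian mixture models on bootstrap resamples of the data and average their density estimates pointwise. The toy data set, the ensemble size, and the number of mixture components are assumptions made for the example, and scikit-learn's GaussianMixture stands in for a generic EM-trained mixture.

```python
# Sketch of ensemble averaging for Gaussian mixture density estimation.
# All data and parameter choices below are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy data: samples from a two-component 1-D mixture (assumed example).
X = np.concatenate([rng.normal(-2.0, 0.5, 300),
                    rng.normal(1.5, 1.0, 300)]).reshape(-1, 1)

n_members = 10       # ensemble size (assumed)
n_components = 5     # mixture components per ensemble member (assumed)

def fit_member(X, seed):
    """Fit one Gaussian mixture by EM on a bootstrap resample of X."""
    idx = rng.integers(0, len(X), size=len(X))
    gmm = GaussianMixture(n_components=n_components, random_state=seed)
    return gmm.fit(X[idx])

members = [fit_member(X, s) for s in range(n_members)]

def averaged_density(x):
    """Ensemble estimate: the pointwise mean of the members' densities."""
    densities = np.stack([np.exp(m.score_samples(x)) for m in members])
    return densities.mean(axis=0)

# Evaluate the averaged density estimate on a grid.
grid = np.linspace(-5.0, 5.0, 200).reshape(-1, 1)
p_hat = averaged_density(grid)
```

Note that because each member's estimate is a valid density, their pointwise average is itself a valid density; the averaged model is equivalent to one larger mixture whose components are pooled from all members, with each member's mixing weights divided by the ensemble size.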