Probability Density Estimation Using Adaptive Activation Function Neurons
Neural Processing Letters
We apply the idea of averaging ensembles of estimators to probability density estimation. In particular, we use Gaussian mixture models, which are important components in many neural-network applications. We investigate the performance of averaging on three data sets. For comparison, we employ two traditional regularization approaches: a maximum penalized likelihood approach and a Bayesian approach. In the maximum penalized likelihood approach, we use penalty functions derived from conjugate Bayesian priors, so that an expectation-maximization (EM) algorithm can be used for training. In all experiments, both the maximum penalized likelihood approach and averaging improved performance considerably compared with a maximum likelihood approach. In two of the experiments, the maximum penalized likelihood approach outperformed averaging; in one, averaging was clearly superior. We conclude that maximum penalized likelihood gives good results when the penalty term in the cost function is appropriate for the problem at hand. When it is not, averaging is superior, since it is more robust: it does not rely on any particular prior assumption. The Bayesian approach worked very well on a low-dimensional toy problem but failed to give good performance on higher-dimensional problems.
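As a concrete illustration of the averaging idea, the sketch below fits an ensemble of Gaussian mixture density estimators and averages their predicted densities. This is a minimal sketch, not the paper's experimental setup: it assumes scikit-learn's GaussianMixture, and the bootstrap resampling, ensemble size, and toy data are illustrative choices rather than the authors' protocol.

```python
# Minimal sketch (illustrative, not the paper's code): averaging an
# ensemble of Gaussian mixture density estimators.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy training data: samples from a two-component 1-D mixture.
X = np.concatenate([rng.normal(-2.0, 0.5, 300),
                    rng.normal(1.5, 1.0, 300)]).reshape(-1, 1)

n_members = 10  # ensemble size (an illustrative choice)
models = []
for m in range(n_members):
    # Each member is trained on a bootstrap resample with its own
    # random initialization, which decorrelates the estimators.
    idx = rng.integers(0, len(X), size=len(X))
    models.append(GaussianMixture(n_components=2, random_state=m).fit(X[idx]))

def averaged_density(x):
    """Ensemble estimate: the plain mean of the member densities."""
    # score_samples returns log p(x); exponentiate before averaging,
    # because the ensemble averages densities, not log-densities.
    dens = np.stack([np.exp(g.score_samples(x)) for g in models])
    return dens.mean(axis=0)

grid = np.linspace(-5.0, 5.0, 200).reshape(-1, 1)
p_hat = averaged_density(grid)  # averaged density estimate on the grid
```

Averaging the densities themselves (rather than the log-densities) keeps the ensemble estimate a proper mixture of the members, which is what makes the approach robust when no single prior assumption fits the problem.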