In Gaussian mixture modeling, it is crucial to select the correct number of Gaussians for a given sample data set. Within the framework of regularization theory, we address this model selection problem by implementing entropy regularized likelihood (ERL) learning on Gaussian mixtures via a batch gradient learning algorithm. Simulation experiments demonstrate that this gradient ERL learning algorithm can automatically select an appropriate number of Gaussians during parameter learning on a sample data set and yields a good estimate of the parameters of the actual Gaussian mixture, even when two or more of the actual Gaussians overlap strongly. We further present an adaptive gradient implementation of ERL learning on Gaussian mixtures, together with a theoretical analysis, and identify a mechanism of generalized competitive learning implicit in ERL learning.
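The batch gradient ERL learning described above can be illustrated with a minimal sketch. The exact ERL functional is not stated in the abstract, so the code below *assumes* the common form: the average log-likelihood of a Gaussian mixture minus a regularization weight `gamma` times the mean Shannon entropy of the posterior probabilities, which penalizes uncertain component assignments and lets redundant mixing weights shrink. The data set, component count `k`, `gamma`, learning rate, and the central-difference gradient are all illustrative choices, not the authors' settings; an analytic gradient would be used in a real implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D sample from two true Gaussians, deliberately over-modelled with k = 4,
# so that automatic model selection has redundant components to suppress.
x = np.concatenate([rng.normal(-2.0, 0.5, 200), rng.normal(2.0, 0.5, 200)])
k = 4

def unpack(params):
    """Split the flat parameter vector into mixing weights, means, and std devs."""
    logits, mu, log_sigma = params[:k], params[k:2 * k], params[2 * k:]
    alpha = np.exp(logits - logits.max())
    alpha /= alpha.sum()                      # softmax keeps weights on the simplex
    return alpha, mu, np.exp(log_sigma)

def erl_objective(params, gamma=0.4):
    """Assumed ERL form: mean log-likelihood - gamma * mean posterior entropy."""
    alpha, mu, sigma = unpack(params)
    # component joint densities alpha_k * N(x_t; mu_k, sigma_k^2), shape (N, k)
    dens = alpha * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
        / (sigma * np.sqrt(2 * np.pi))
    mix = dens.sum(axis=1) + 1e-300           # mixture density, guarded against underflow
    post = dens / mix[:, None]                # posterior p(k | x_t)
    entropy = -(post * np.log(post + 1e-300)).sum(axis=1).mean()
    return np.log(mix).mean() - gamma * entropy

def num_grad(f, params, eps=1e-5):
    """Central-difference gradient (sketch only; analytic gradients are faster)."""
    g = np.zeros_like(params)
    for i in range(params.size):
        d = np.zeros_like(params)
        d[i] = eps
        g[i] = (f(params + d) - f(params - d)) / (2 * eps)
    return g

# Flat parameters: softmax logits, means, log standard deviations.
params = np.concatenate([np.zeros(k), rng.normal(0.0, 1.0, k), np.zeros(k)])
before = erl_objective(params)
for _ in range(200):                          # batch gradient ascent on the ERL objective
    params += 0.05 * num_grad(erl_objective, params)
after = erl_objective(params)
alpha, mu, sigma = unpack(params)
```

In this sketch, the entropy term rewards confident posteriors, so components compete for data points, which is one way to read the "generalized competitive learning" mechanism mentioned in the abstract: mixing weights of components that lose the competition are driven toward zero, effecting automatic model selection.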