The Expectation-Maximization (EM) algorithm [10] has been extensively used in density mixture clustering problems, but it is unable to perform model selection automatically. This paper therefore proposes to learn the model parameters by maximizing a weighted likelihood. Under a specific weight design, we derive a Rival Penalized Expectation-Maximization (RPEM) algorithm, which makes the components in a density mixture compete with each other at each time step. Not only are the parameters of the winning component updated to adapt to an input, but the parameters of all rival components are also penalized with a strength proportional to their posterior probabilities. Compared to the EM algorithm [10], the RPEM is able to fade out redundant densities from a density mixture during the learning process. Hence, it can automatically select an appropriate number of densities in density mixture clustering. We experimentally demonstrate its outstanding performance on Gaussian mixtures and a color image segmentation problem. Moreover, a simplified version of the RPEM generalizes our recently proposed RPCCL algorithm [8] so that it also becomes applicable to elliptical clusters with any input proportion. Compared to the existing heuristic RPCL [25] and its variants, this generalized RPCCL (G-RPCCL) circumvents the difficult preselection of the so-called delearning rate. Additionally, a special setting of the G-RPCCL not only degenerates to RPCL and its Type A variant, but also provides guidance on choosing an appropriate delearning rate for them. Subsequently, we propose stochastic versions of RPCL and its Type A variant, respectively, in which the difficult selection of the delearning rate is circumvented. The experiments show promising results for this stochastic implementation.
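To illustrate the winner-update and rival-penalization idea described above, the following is a minimal Python sketch of an RPEM-style online update for an isotropic Gaussian mixture. It is not the paper's exact algorithm: the weight design, the learning rate lr, the penalization strength penalty, and the fixed shared variances are simplifying assumptions made only for illustration.

# Illustrative sketch of a rival-penalized EM-style update for a Gaussian mixture.
# NOT the exact RPEM of the paper: the weight design, learning rate, and
# penalization strength below are simplified assumptions used only to show the
# winner-update / rival-penalization mechanism and the fading-out of
# redundant components.
import numpy as np

def gaussian_pdf(x, mean, var):
    # Isotropic Gaussian density (diagonal covariance var * I, assumed for simplicity).
    d = x.shape[0]
    diff = x - mean
    return np.exp(-0.5 * np.sum(diff ** 2) / var) / np.sqrt((2.0 * np.pi * var) ** d)

def rpem_sketch(X, k=5, lr=0.05, penalty=0.5, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    means = X[rng.choice(n, k, replace=False)].copy()  # component means
    variances = np.full(k, np.var(X))                   # fixed shared isotropic variances
    log_alpha = np.zeros(k)                             # unnormalized log mixing weights

    for _ in range(epochs):
        for x in X[rng.permutation(n)]:
            alpha = np.exp(log_alpha - log_alpha.max())
            alpha /= alpha.sum()
            # Posterior probabilities h_j(x) for every component (E-step-like quantity).
            p = np.array([alpha[j] * gaussian_pdf(x, means[j], variances[j])
                          for j in range(k)])
            h = p / (p.sum() + 1e-300)
            winner = int(np.argmax(h))
            for j in range(k):
                if j == winner:
                    # The winner adapts toward the input.
                    means[j] += lr * h[j] * (x - means[j])
                    log_alpha[j] += lr * h[j]
                else:
                    # Rivals are penalized with strength proportional to their
                    # posteriors, so redundant components gradually fade out.
                    means[j] -= lr * penalty * h[j] * (x - means[j])
                    log_alpha[j] -= lr * penalty * h[j]

    alpha = np.exp(log_alpha - log_alpha.max())
    return means, alpha / alpha.sum()

When this sketch is run with k chosen larger than the true number of clusters, the mixing weights of redundant components shrink toward zero and those components can be discarded, mirroring the automatic model-selection behaviour described in the abstract.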