Kernel k'-means algorithm for clustering analysis
ICIC'13 Proceedings of the 9th international conference on Intelligent Computing Theories and Technology
This paper proposes a new family of k'-means algorithms for clustering analysis, built on three frequency-sensitive (data) discrepancy metrics, for the case where the exact number of clusters in a dataset is not known in advance. By setting the number k of seed points used to learn the clusters to be larger than the true number k' of actual clusters in the dataset, i.e., k > k', these algorithms locate the centers of the k' actual clusters with k' converged seed points, while the extra k - k' seed points correspond to empty clusters, that is, clusters that win no points in the competition under the underlying frequency-sensitive discrepancy metrics. Experiments on both synthetic and real-world datasets demonstrate that these three new k'-means clustering algorithms can detect the number of actual clusters in a dataset with a classification accuracy rate as high as, or higher than, that of the original k'-means algorithm, and that they converge more quickly.
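As a rough illustration of the idea (not the paper's actual metrics), the sketch below uses one plausible frequency-sensitive discrepancy in the spirit of the k'-means literature: a point is assigned to the seed point minimizing ||x - c_j||^2 - lam * log(p_j), where p_j is cluster j's current winning frequency and lam is a penalty weight. Seed points that rarely win are penalized ever more heavily, so with k > k' the extra seed points tend to be starved into empty clusters. The function name, the specific metric, and the parameter lam are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def kprime_means(X, k, lam=1.0, n_iter=50, seed=0):
    """Hypothetical frequency-sensitive k'-means sketch.

    Assignment uses the discrepancy ||x - c_j||^2 - lam * log(p_j),
    where p_j is cluster j's winning frequency, so rarely winning
    seed points are progressively starved of points.
    """
    rng = np.random.default_rng(seed)
    # initialize k seed points from the data (k may exceed the
    # true cluster count k')
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    p = np.full(k, 1.0 / k)  # winning frequencies, start uniform
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # squared Euclidean distances to every seed point
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        # frequency-sensitive discrepancy: penalize rare winners
        labels = np.argmin(d2 - lam * np.log(p + 1e-12), axis=1)
        # recompute centers of non-empty clusters only
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
        # update winning frequencies
        counts = np.bincount(labels, minlength=k)
        p = counts / counts.sum()
    return centers, labels

# toy usage: two well-separated Gaussian blobs, k = 4 seed points
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (100, 2)),
               rng.normal(5, 0.3, (100, 2))])
centers, labels = kprime_means(X, k=4)
n_nonempty = len(np.unique(labels))  # ideally close to the true k' = 2
```

The key design point mirrored here is that the extra seed points are not deleted explicitly; the frequency-sensitive metric alone drives them toward empty clusters, which is how the number of actual clusters can be read off after convergence.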