Minkowski Weighted K-Means is a variant of K-Means set in Minkowski space that automatically computes feature weights at each cluster. As with K-Means, its accuracy depends heavily on the initial centroids it is given. In this paper we report experiments comparing six initializations, random plus five others set in Minkowski space, in terms of accuracy, processing time, and recovery of the Minkowski exponent p. We find that the Ward method in Minkowski space tends to outperform the other initializations, except on low-dimensional Gaussian models with noise features, where a modified version of intelligent K-Means excels.
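For concreteness, the two quantities that distinguish Minkowski Weighted K-Means from plain K-Means, the weighted Minkowski distance and the per-cluster feature-weight update, can be sketched as follows. This is a minimal NumPy sketch of one common formulation, not the authors' code; the dispersion smoothing term added to avoid division by zero is an assumption.

```python
import numpy as np

def mwk_distance(x, center, weights, p):
    # Weighted Minkowski criterion: d(x, c) = sum_v w_v^p * |x_v - c_v|^p.
    return np.sum((weights ** p) * np.abs(x - center) ** p)

def update_weights(points, center, p):
    # D_v: dispersion of feature v around the cluster centroid.
    D = np.sum(np.abs(points - center) ** p, axis=0)
    D = D + np.mean(D)  # smoothing so a zero-dispersion feature stays finite (assumption)
    # w_v = 1 / sum_u (D_v / D_u)^(1/(p-1)); weights in a cluster sum to 1,
    # so low-dispersion (informative) features receive larger weights.
    exponent = 1.0 / (p - 1.0)
    ratio = D[:, None] / D[None, :]          # ratio[v, u] = D_v / D_u
    return 1.0 / np.sum(ratio ** exponent, axis=1)
```

At p = 2 this reduces to the familiar inverse-dispersion weighting; the exponent p is the quantity whose recovery the experiments above evaluate.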