Mathematical and Computer Modelling: An International Journal
Although much research on cluster analysis has considered feature (or variable) weights, little attention has been paid to sample weights. In practice, not every sample in a data set is equally important to cluster analysis, so it is worthwhile to obtain proper sample weights for clustering. In this paper, we represent the sample weights of a data set as a probability distribution over its samples and apply the maximum entropy principle to compute these weights automatically during clustering. This approach yields sample-weighted versions of most clustering algorithms, such as k-means, fuzzy c-means (FCM), and expectation-maximization (EM). The resulting sample-weighted clustering algorithms are robust for data sets containing noise and outliers. We also analyze the convergence properties of the proposed algorithms. Experiments on synthetic and real data sets demonstrate that the proposed sample-weighted clustering algorithms are effective and robust.
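The maximum-entropy sample-weighting idea can be sketched for k-means as follows. This is an illustrative reconstruction, not the authors' exact algorithm: it assumes the per-sample cost d_i is the squared distance to the nearest centroid, so that minimizing sum_i w_i d_i + gamma * sum_i w_i ln w_i subject to sum_i w_i = 1 gives the standard maximum-entropy solution w_i proportional to exp(-d_i / gamma). The function name, the parameter gamma, and the deterministic initialization are all assumptions made for the sketch.

```python
import numpy as np

def sample_weighted_kmeans(X, k, gamma=1.0, n_iter=50):
    """Sketch of a sample-weighted k-means (illustrative, not the paper's
    exact algorithm).  Sample weights follow the maximum entropy principle:
    w_i ~ exp(-d_i / gamma), where d_i is the squared distance from sample
    i to its nearest centroid, so outliers receive exponentially small
    weight."""
    X = np.asarray(X, dtype=float)
    # simple deterministic initialization (an assumption of this sketch)
    centers = X[:k].copy()
    w = np.full(len(X), 1.0 / len(X))
    for _ in range(n_iter):
        # assign each sample to its nearest centroid
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        d_min = d2.min(axis=1)
        # maximum-entropy sample weights: w_i ~ exp(-d_i / gamma)
        logits = -d_min / gamma
        logits -= logits.max()          # for numerical stability
        w = np.exp(logits)
        w /= w.sum()
        # sample-weighted centroid update
        for j in range(k):
            mask = labels == j
            if w[mask].sum() > 0:
                centers[j] = (w[mask, None] * X[mask]).sum(axis=0) / w[mask].sum()
    return centers, labels, w
```

Larger values of gamma flatten the weight distribution toward uniform (recovering ordinary k-means in the limit), while smaller values concentrate weight on samples close to the centroids, which is what makes the scheme robust to noise and outliers.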