Letter Recognition Using Holland-Style Adaptive Classifiers. Machine Learning.
Fast ISODATA clustering algorithms. Pattern Recognition.
ACM Computing Surveys (CSUR).
An empirical comparison of four initialization methods for the K-Means algorithm. Pattern Recognition Letters.
A clustering algorithm based on graph connectivity. Information Processing Letters.
Mean Shift: A Robust Approach Toward Feature Space Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Mean Shift, Mode Seeking, and Clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence.
X-means: Extending K-means with Efficient Estimation of the Number of Clusters. ICML '00 Proceedings of the Seventeenth International Conference on Machine Learning.
Robust analysis of feature spaces: color image segmentation. CVPR '97 Proceedings of the 1997 Conference on Computer Vision and Pattern Recognition (CVPR '97).
Pattern Recognition and Machine Learning (Information Science and Statistics).
A tutorial on spectral clustering. Statistics and Computing.
Data clustering: 50 years beyond K-means. Pattern Recognition Letters.
Inferring parameters and structure of latent variable models by variational bayes. UAI'99 Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence.
An Evolutionary Approach to Multiobjective Clustering. IEEE Transactions on Evolutionary Computation.
Color Image Segmentation Based on Mean Shift and Normalized Cuts. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics.
The estimation of the gradient of a density function, with applications in pattern recognition. IEEE Transactions on Information Theory.
A self-organizing network for hyperellipsoidal clustering (HEC). IEEE Transactions on Neural Networks.
We present a novel unsupervised algorithm for quickly finding clusters in multi-dimensional data. Rather than assuming isotropy, it exploits the full anisotropic Gaussian kernel to adapt to the local shape and scale of the data. We employ some little-used properties of the multivariate Gaussian distribution to represent the data and, as a corollary of the theory we formulate, obtain a simple yet principled means of preventing singularities in Gaussian models. The efficacy and robustness of the proposed method are demonstrated on both real and artificial data, with qualitative and quantitative results and comparisons against the well-known mean-shift and K-means algorithms.
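To make the singularity issue concrete: fitting a full (anisotropic) covariance can fail when the local sample is degenerate, e.g. nearly collinear points. The abstract does not spell out the paper's own singularity-prevention scheme, so the sketch below illustrates the problem with a standard regularization heuristic instead (a small ridge on the sample covariance), purely as a stand-in; the function name and the `eps` parameter are illustrative, not from the paper.

```python
import numpy as np

def regularized_gaussian_fit(X, eps=1e-3):
    """Fit a full-covariance (anisotropic) Gaussian to the rows of X.

    A small ridge, scaled by the average variance, is added to the
    diagonal so the covariance stays positive definite even when the
    sample is degenerate (fewer points than dimensions, or collinear
    points). NOTE: this ridge is a common heuristic, not the principled
    scheme the paper derives.
    """
    mu = X.mean(axis=0)
    centered = X - mu
    d = X.shape[1]
    cov = centered.T @ centered / len(X)          # may be singular
    cov += eps * (np.trace(cov) / d) * np.eye(d)  # ridge keeps it invertible
    return mu, cov

# Three collinear points in 2-D: the raw sample covariance has rank 1,
# but the ridged covariance is full-rank and safely invertible.
X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
mu, cov = regularized_gaussian_fit(X)
```

Without some such safeguard, a cluster that collapses onto a lower-dimensional subspace drives the Gaussian's determinant to zero and its likelihood to infinity, which is exactly the failure mode the paper's corollary is meant to rule out.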