In this paper we describe a new cluster model based on the concept of linear manifolds. The method identifies subsets of the data that are embedded in arbitrarily oriented lower-dimensional linear manifolds. Minimal subsets of points are repeatedly sampled to construct trial linear manifolds of various dimensions. Histograms of the distances of the points to each trial manifold are computed. The trial whose histogram shows the best separation between a mode near zero and the rest is selected, and the data points are partitioned at that separation. The repeated sampling then continues recursively on each block of the partitioned data. A broad evaluation comprising about one hundred experiments on real and synthetic data sets demonstrates the general superiority of this algorithm over competing algorithms in terms of stability, accuracy, and computation time.
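The sampling-and-histogram step described above can be sketched as follows for a single manifold dimension. This is a hypothetical minimal implementation, not the authors' code: the function names, the simple valley-finding heuristic for the histogram separation, and the fixed bin count are all assumptions made for illustration.

```python
import numpy as np

def manifold_distances(X, sample, dim):
    """Orthogonal distance of every row of X to the dim-dimensional linear
    manifold spanned by a minimal sample of dim + 1 points."""
    origin = sample[0]
    # Orthonormal basis for the manifold from the remaining sample points.
    Q, _ = np.linalg.qr((sample[1:] - origin).T)
    D = X - origin
    return np.linalg.norm(D - D @ Q @ Q.T, axis=1)

def best_partition(X, dim, trials=50, bins=20, rng=None):
    """Repeatedly sample minimal point subsets, keep the trial whose distance
    histogram best separates a mode near zero from the rest, and split X at
    that separation (a simple peak-to-valley heuristic, assumed here)."""
    rng = rng or np.random.default_rng(0)
    best = (-np.inf, None, None)  # (separation depth, threshold, distances)
    for _ in range(trials):
        idx = rng.choice(len(X), size=dim + 1, replace=False)
        d = manifold_distances(X, X[idx], dim)
        hist, edges = np.histogram(d, bins=bins)
        peak = int(np.argmax(hist[: bins // 2]))      # near-zero mode
        valley = peak + int(np.argmin(hist[peak:]))   # first deep gap after it
        depth = hist[peak] - hist[valley]
        if depth > best[0]:
            best = (depth, edges[valley], d)
    _, threshold, d = best
    in_cluster = d <= threshold
    return X[in_cluster], X[~in_cluster]
```

In the full algorithm this partitioning step would be tried over several manifold dimensions and applied recursively to each resulting block until no good separation remains.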