Efficient clustering algorithms are strongly needed for very large databases and high-dimensional data. Parallel algorithms offer one solution, providing powerful computing ability, and a PC cluster is a low-cost, general-purpose parallel computing system. In this paper, we first theoretically analyze the use of data parallelism in designing a parallel clustering algorithm for PC cluster systems, including an analysis of speedup and the selection of communication schemes. We then present a parallel hierarchical clustering algorithm called PARC. Experimental results confirm the theoretical analysis and show that, in general, PARC achieves clustering quality comparable to that of sequential clustering algorithms while considerably reducing communication time.
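The data-parallel idea sketched in the abstract can be illustrated with a toy example: each worker holds a slice of the data indices, computes a local closest pair (the basic step of agglomerative hierarchical clustering), and only small result triples are communicated back to the master rather than raw data. This is a minimal sketch of the general approach, not the paper's PARC algorithm; the function names and partitioning scheme are illustrative assumptions.

```python
# Minimal data-parallel closest-pair sketch (illustrative, not PARC itself).
from multiprocessing import Pool
import math

def euclidean(a, b):
    """Euclidean distance between two points given as tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def local_closest_pair(args):
    """Closest pair (distance, i, j) over a worker's slice of row indices."""
    rows, points = args
    best = (float("inf"), None, None)
    for i in rows:
        for j in range(i + 1, len(points)):
            d = euclidean(points[i], points[j])
            if d < best[0]:
                best = (d, i, j)
    return best

def parallel_closest_pair(points, n_workers=2):
    # Data parallelism: partition the row-index space across workers.
    chunks = [list(range(w, len(points), n_workers)) for w in range(n_workers)]
    with Pool(n_workers) as pool:
        results = pool.map(local_closest_pair, [(c, points) for c in chunks])
    # Communication step: only (distance, i, j) triples are exchanged,
    # which is why communication cost stays small relative to computation.
    return min(results)

if __name__ == "__main__":
    pts = [(0.0, 0.0), (5.0, 5.0), (0.1, 0.0), (9.0, 1.0)]
    print(parallel_closest_pair(pts))  # closest pair: indices 0 and 2
```

In a real PC-cluster setting the data would be distributed across machines and the merge would be done with message passing (e.g., MPI reduce) rather than shared-memory multiprocessing, but the pattern — local computation, then a cheap reduction of small results — is the same.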