Algorithms for clustering data
An efficient agglomerative clustering algorithm using a heap. Pattern Recognition
BIRCH: an efficient data clustering method for very large databases. SIGMOD '96 Proceedings of the 1996 ACM SIGMOD International Conference on Management of Data
On-line hierarchical clustering. Pattern Recognition Letters
Analysis of Effectiveness of Retrieval in Clustered Files. Journal of the ACM (JACM)
On Clustering Validation Techniques. Journal of Intelligent Information Systems
Incremental Clustering for Mining in a Data Warehousing Environment. VLDB '98 Proceedings of the 24th International Conference on Very Large Data Bases
Efficient and Effective Clustering Methods for Spatial Data Mining. VLDB '94 Proceedings of the 20th International Conference on Very Large Data Bases
STING: A Statistical Information Grid Approach to Spatial Data Mining. VLDB '97 Proceedings of the 23rd International Conference on Very Large Data Bases
Validation indices for graph clustering. Pattern Recognition Letters - Special Issue: Graph-Based Representations in Pattern Recognition
A New Cluster Isolation Criterion Based on Dissimilarity Increments. IEEE Transactions on Pattern Analysis and Machine Intelligence
Pattern Recognition, Third Edition
Clustering by competitive agglomeration. Pattern Recognition
In this paper, we propose a new clustering method in which each cluster is created based on a characteristic we call texture. The texture is extracted by measuring the similarity of neighboring patterns. The proposed clustering algorithm consists of two stages. In the first stage, sub-clusters are created based on the similarity of their structures; in the second stage, sub-clusters with similar textures that lie close to each other are hierarchically combined into larger clusters. A theoretical justification for the proposed cluster isolation measure is presented. Experimental results on complex data show that the performance of our method is superior to that of the well-known K-means and Single Link methods. The proposed algorithm is independent of the order in which training samples are presented, and its computational complexity is lower than that of traditional hierarchical algorithms.
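The two-stage structure described in the abstract can be sketched in code. The following is an illustrative simplification, not the paper's actual method: the texture criterion is replaced here by a plain distance threshold for sub-cluster formation, and the hierarchical combination step by centroid-distance merging. The function name and both `eps` thresholds are hypothetical, and this greedy sketch does not reproduce the paper's order-independence property.

```python
import math


def two_stage_clustering(points, subcluster_eps=1.0, merge_eps=3.0):
    """Sketch of a two-stage clustering scheme (illustrative only).

    Stage 1: greedily group points into sub-clusters whose members lie
    within `subcluster_eps` of an existing member -- a crude stand-in
    for the paper's texture (similarity-of-neighboring-patterns) test.
    Stage 2: agglomeratively merge sub-clusters whose centroids lie
    within `merge_eps` of each other, standing in for the hierarchical
    combination of sub-clusters with similar textures.
    """
    # Stage 1: threshold-based sub-cluster formation
    subclusters = []
    for p in points:
        for sc in subclusters:
            if any(math.dist(p, q) <= subcluster_eps for q in sc):
                sc.append(p)
                break
        else:
            subclusters.append([p])

    def centroid(sc):
        # Component-wise mean of the sub-cluster's points.
        return tuple(sum(c) / len(sc) for c in zip(*sc))

    # Stage 2: repeatedly merge the first pair of sufficiently
    # close sub-clusters until no pair qualifies.
    merged = True
    while merged:
        merged = False
        for i in range(len(subclusters)):
            for j in range(i + 1, len(subclusters)):
                if math.dist(centroid(subclusters[i]),
                             centroid(subclusters[j])) <= merge_eps:
                    subclusters[i].extend(subclusters.pop(j))
                    merged = True
                    break
            if merged:
                break
    return subclusters
```

For example, two tight groups far apart stay separate, while nearby sub-clusters are merged in the second stage.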