In this paper, we present an agglomerative fuzzy $k$-means clustering algorithm for numerical data, which extends the standard fuzzy $k$-means algorithm by introducing a penalty term into the objective function that makes the clustering process insensitive to the initial cluster centers. The new algorithm produces more consistent clustering results across different sets of initial cluster centers. Combined with cluster-validation techniques, it can determine the number of clusters in a data set, a well-known problem in $k$-means clustering. Experimental results on synthetic data sets (2 to 5 dimensions, 500 to 5000 objects, and 3 to 7 clusters), the BIRCH two-dimensional data set of 20000 objects and 100 clusters, and the WINE data set of 178 objects, 17 dimensions, and 3 clusters from UCI demonstrate the effectiveness of the new algorithm in producing consistent clustering results and determining the correct number of clusters, even for data sets whose inherent clusters overlap.
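To illustrate the general idea of penalizing the fuzzy $k$-means objective, the following is a minimal NumPy sketch of a fuzzy $k$-means iteration with an entropy-style penalty on the memberships, which makes the membership update a soft-max over (negative) distances. This is an illustrative sketch, not the authors' full agglomerative procedure: the function name `fuzzy_kmeans_entropy`, the penalty weight `lam`, and the `init` parameter are all assumptions introduced here for demonstration.

```python
import numpy as np

def fuzzy_kmeans_entropy(X, k, lam=1.0, n_iter=100, init=None, seed=0):
    """Sketch of fuzzy k-means with an entropy penalty on memberships.

    The penalty weight `lam` controls fuzziness: small `lam` gives nearly
    hard assignments, large `lam` gives nearly uniform memberships.
    Returns the cluster centers and the (n, k) membership matrix.
    """
    rng = np.random.default_rng(seed)
    if init is not None:
        centers = np.asarray(init, dtype=float)
    else:
        centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Squared distances from every point to every center, shape (n, k).
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        # Entropy-penalized membership update: a soft-max over clusters.
        # Subtracting the row minimum stabilizes exp() against underflow.
        u = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / lam)
        u /= u.sum(axis=1, keepdims=True)
        # Center update: membership-weighted means of the data points.
        centers = (u.T @ X) / u.sum(axis=0)[:, None]
    return centers, u
```

On two well-separated Gaussian blobs, a small `lam` recovers the blob means and near-hard memberships; increasing `lam` smooths the memberships toward uniform, which is one way to see how the penalty term trades off fit against assignment sharpness.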