BIRCH: an efficient data clustering method for very large databases
SIGMOD '96: Proceedings of the 1996 ACM SIGMOD International Conference on Management of Data
CURE: an efficient clustering algorithm for large databases
SIGMOD '98: Proceedings of the 1998 ACM SIGMOD International Conference on Management of Data
A monothetic clustering method
Pattern Recognition Letters
ROCK: a robust clustering algorithm for categorical attributes
Information Systems
A Modified Version of the K-Means Algorithm with a Distance Based on Cluster Symmetry
IEEE Transactions on Pattern Analysis and Machine Intelligence
Extensions to the k-Means Algorithm for Clustering Large Data Sets with Categorical Values
Data Mining and Knowledge Discovery
An Efficient k-Means Clustering Algorithm: Analysis and Implementation
IEEE Transactions on Pattern Analysis and Machine Intelligence
Collaborative fuzzy clustering
Pattern Recognition Letters
An Efficient Fuzzy C-Means Clustering Algorithm
ICDM '01: Proceedings of the 2001 IEEE International Conference on Data Mining
Generalized fuzzy c-means clustering strategies using Lp norm distances
IEEE Transactions on Fuzzy Systems
Reducing the time complexity of the fuzzy c-means algorithm
IEEE Transactions on Fuzzy Systems
Survey of clustering algorithms
IEEE Transactions on Neural Networks
An architecture to efficiently learn co-similarities from multi-view datasets
ICONIP '12: Proceedings of the 19th International Conference on Neural Information Processing, Part I
Nonlinear multicriteria clustering based on multiple dissimilarity matrices
Pattern Recognition
This paper introduces hard clustering algorithms that partition objects while simultaneously taking into account relational descriptions given by multiple dissimilarity matrices. These matrices may be generated using different sets of variables and dissimilarity functions. The methods are designed to furnish a partition and a prototype for each cluster, and to learn a relevance weight for each dissimilarity matrix, by optimizing an adequacy criterion that measures the fit between the clusters and their representatives. The relevance weights are updated at each iteration and can either be the same for all clusters or differ from one cluster to another. Experiments with synthetic data sets, data sets from the UCI machine learning repository described by real-valued variables, and time-trajectory data sets demonstrate the usefulness of the proposed algorithms.
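The idea in the abstract — alternating an allocation step, a prototype step, and a relevance-weight step over several dissimilarity matrices — can be sketched as follows. This is a minimal illustration, not the paper's exact method: it assumes medoid prototypes, a single global weight per matrix constrained so the weights' product equals one, and the closed-form weight update that this constraint typically yields; all function and variable names are made up for the example.

```python
import numpy as np

def multiview_medoid_clustering(D_list, K, n_iter=20, seed=0):
    """Hard clustering driven by several dissimilarity matrices.

    Sketch of the weighted-medoid idea: each view j gets a relevance
    weight lam[j] (product constrained to 1), each cluster is
    represented by a medoid g_k, and the three steps alternate to
    lower the adequacy criterion
        J = sum_k sum_{i in C_k} sum_j lam[j] * D_j[i, g_k].
    """
    rng = np.random.default_rng(seed)
    n = D_list[0].shape[0]
    p = len(D_list)
    lam = np.ones(p)                               # start with equal relevance
    medoids = rng.choice(n, size=K, replace=False) # random initial prototypes
    labels = np.zeros(n, dtype=int)
    for _ in range(n_iter):
        # Allocation: assign each object to its closest weighted medoid.
        cost = sum(l * D[:, medoids] for l, D in zip(lam, D_list))  # (n, K)
        labels = cost.argmin(axis=1)
        # Prototype: per cluster, pick the member minimizing the weighted
        # total dissimilarity to the other members.
        for k in range(K):
            members = np.where(labels == k)[0]
            if members.size == 0:
                continue  # keep the old medoid for an empty cluster
            sub = sum(l * D[np.ix_(members, members)]
                      for l, D in zip(lam, D_list))
            medoids[k] = members[sub.sum(axis=0).argmin()]
        # Weights: closed form under the product-to-one constraint.
        per_view = np.array([D[np.arange(n), medoids[labels]].sum()
                             for D in D_list])
        per_view = np.maximum(per_view, 1e-12)     # guard against zeros
        lam = np.prod(per_view) ** (1.0 / p) / per_view
    return labels, medoids, lam
```

Because the weight update divides the geometric mean of the per-view costs by each view's own cost, a matrix on which the current clusters are compact automatically receives a larger relevance weight, which matches the abstract's description of weights that change at each iteration.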