Cluster ensemble techniques are a means of boosting clustering performance. However, many cluster ensemble methods suffer from high computational complexity; in particular, the median partition problem is NP-complete. While a variety of approximate approaches for suboptimal solutions have been proposed in the literature, their performance is typically evaluated against ground truth. In contrast, this work explores how well cluster ensemble methods perform in an absolute sense, without ground truth, i.e. how close they come to the (unknown) optimal solution. We present a study that applies and extends a lower bound as an attempt to answer this question. In particular, we demonstrate the tightness of the lower bound, which indicates that there is no further room for improvement (for the particular data set at hand). The lower bound can thus be considered a means of exploring the performance limit of cluster ensemble techniques.
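The abstract does not state which lower bound is used, but a classical bound of this kind follows directly from the triangle inequality: for any metric d on partitions and any candidate median p̄, summing d(p_i, p_j) ≤ d(p_i, p̄) + d(p̄, p_j) over all pairs gives SOD(p̄) ≥ Σ_{i<j} d(p_i, p_j) / (n − 1). The sketch below is an illustration under that assumption, using the Mirkin (pair-counting) distance as an example metric; the partition data, function names, and choice of metric are all hypothetical, not taken from the paper.

```python
from itertools import combinations

def mirkin_distance(p, q):
    """Mirkin (pair-counting) distance between two partitions given as
    label lists: the number of element pairs grouped together in one
    partition but separated in the other. It is a metric, which the
    lower bound below relies on."""
    d = 0
    for i, j in combinations(range(len(p)), 2):
        if (p[i] == p[j]) != (q[i] == q[j]):
            d += 1
    return d

def sod(candidate, ensemble, dist):
    """Sum of distances (SOD) from a candidate partition to the ensemble;
    the median partition is the minimizer of this quantity."""
    return sum(dist(candidate, p) for p in ensemble)

def pairwise_lower_bound(ensemble, dist):
    """Triangle-inequality lower bound on the SOD of the (unknown)
    median partition: LB = sum_{i<j} d(p_i, p_j) / (n - 1)."""
    n = len(ensemble)
    total = sum(dist(p, q) for p, q in combinations(ensemble, 2))
    return total / (n - 1)

# Toy ensemble: three clusterings of five objects (hypothetical data).
ensemble = [
    [0, 0, 1, 1, 2],
    [0, 0, 0, 1, 1],
    [0, 1, 1, 1, 2],
]

lb = pairwise_lower_bound(ensemble, mirkin_distance)           # 6.0
best = min(sod(p, ensemble, mirkin_distance) for p in ensemble) # 7
# Any achievable SOD is bounded below by LB; the gap (here 7 vs 6.0)
# measures how far a consensus solution can be from optimal at most.
assert lb <= best
```

The smaller the gap between the best SOD found by an ensemble method and this bound, the closer that method provably is to the optimal median partition, even though the optimum itself is never computed; a tight bound, as the abstract emphasizes, certifies that little improvement remains.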