Model-based evaluation of clustering validation measures

  • Authors:
  • Marcel Brun;Chao Sima;Jianping Hua;James Lowey;Brent Carroll;Edward Suh;Edward R. Dougherty

  • Affiliations:
  • Translational Genomics Research Institute, Phoenix, Arizona, USA;Department of Electrical Engineering, Texas A&M University, College Station, TX, USA;Translational Genomics Research Institute, Phoenix, Arizona, USA;Translational Genomics Research Institute, Phoenix, Arizona, USA;Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA;Translational Genomics Research Institute, Phoenix, Arizona, USA;Translational Genomics Research Institute, Phoenix, Arizona, USA and Department of Electrical Engineering, Texas A&M University, College Station, TX, USA and Department of Pathology, University of ...

  • Venue:
  • Pattern Recognition
  • Year:
  • 2007


Abstract

A cluster operator takes a set of data points and partitions the points into clusters (subsets). As with any scientific model, the scientific content of a cluster operator lies in its ability to predict results. This ability is measured by its error rate relative to cluster formation. To estimate the error of a cluster operator, a sample of point sets is generated, the algorithm is applied to each point set, the resulting clusters are evaluated relative to the known partition determined by the underlying distributions, and the errors are then averaged over the point sets composing the sample.

Many validity measures have been proposed for evaluating clustering results based on a single realization of the random-point-set process. In this paper we consider a number of proposed validity measures and examine how well they correlate with error rates across a number of clustering algorithms and random-point-set models. Validity measures fall broadly into three classes: internal validation is based on calculating properties of the resulting clusters; relative validation is based on comparing partitions generated by the same algorithm with different parameters or different subsets of the data; and external validation compares the partition generated by the clustering algorithm with a given partition of the data.

To quantify the degree of similarity between the validation indices and the clustering errors, we use Kendall's rank correlation between their values. Our results indicate that, overall, the performance of validity indices is highly variable. For complex models, or when a clustering algorithm yields complex clusters, both the internal and relative indices fail to predict the error of the algorithm. Some external indices appear to perform well, whereas others do not.
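As a concrete illustration (not code from the paper), Kendall's rank correlation between a validity index and the true error rate can be computed as the normalized difference between concordant and discordant pairs. The index and error values below are invented for illustration; note that for an index where higher means better clustering, good predictive power shows up as strong *negative* correlation with error.

```python
def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) / number of pairs."""
    n = len(x)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            # Product of pairwise difference signs: +1 concordant, -1 discordant.
            s += ((x[i] > x[j]) - (x[i] < x[j])) * ((y[i] > y[j]) - (y[i] < y[j]))
    return s / (n * (n - 1) / 2)

# Hypothetical scores: validity-index values and estimated error rates
# for five model/algorithm settings (illustrative numbers only).
index_values = [0.82, 0.75, 0.60, 0.41, 0.30]
error_rates = [0.05, 0.09, 0.20, 0.33, 0.41]
print(kendall_tau(index_values, error_rates))  # -1.0: index perfectly anti-correlated with error
```

A tau near -1 here would mean the index ranks settings exactly opposite to their error rates, i.e. it is a reliable predictor; a tau near 0 would mean the index carries little information about the error, which is the failure mode the paper reports for internal and relative indices on complex models.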
We conclude that one should not put much faith in a validity score unless there is evidence, either in terms of sufficient data for model estimation or prior model knowledge, that the validity measure is well correlated with the error rate of the clustering algorithm.
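The model-based error-estimation loop described in the abstract can be sketched as follows. The two-Gaussian model, the one-dimensional 2-means routine, and all parameter values are illustrative assumptions for this sketch, not the models or algorithms actually studied in the paper.

```python
import random

def kmeans_1d(points, iters=20):
    # Minimal 2-means in one dimension (illustrative, not robust).
    c0, c1 = min(points), max(points)
    for _ in range(iters):
        labels = [0 if abs(p - c0) <= abs(p - c1) else 1 for p in points]
        g0 = [p for p, l in zip(points, labels) if l == 0]
        g1 = [p for p, l in zip(points, labels) if l == 1]
        if g0:
            c0 = sum(g0) / len(g0)
        if g1:
            c1 = sum(g1) / len(g1)
    return labels

def clustering_error(true_labels, pred_labels):
    # Misclassification rate against the known partition,
    # minimized over the two possible label permutations.
    n = len(true_labels)
    direct = sum(t != p for t, p in zip(true_labels, pred_labels)) / n
    flipped = sum(t != 1 - p for t, p in zip(true_labels, pred_labels)) / n
    return min(direct, flipped)

def estimate_error(sigma, n_sets=30, n_points=40, seed=0):
    # Model-based error estimate: draw a sample of point sets from a
    # known two-Gaussian model (means 0 and 3, std sigma), cluster each,
    # score against the known partition, and average over the sample.
    rng = random.Random(seed)
    errs = []
    for _ in range(n_sets):
        true = [i % 2 for i in range(n_points)]
        pts = [rng.gauss(0.0 if t == 0 else 3.0, sigma) for t in true]
        errs.append(clustering_error(true, kmeans_1d(pts)))
    return sum(errs) / len(errs)

print(estimate_error(0.3))  # well-separated clusters: error near zero
print(estimate_error(3.0))  # heavy overlap: substantially higher error
```

The paper's question is then whether a validity index computed from a *single* realization (without access to the true partition) ranks models and algorithms the same way this distribution-based error estimate does.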