Selecting diversifying heuristics for cluster ensembles

  • Authors:
  • Stefan T. Hadjitodorov; Ludmila I. Kuncheva

  • Affiliations:
  • CLBME, Bulgarian Academy of Sciences, Bulgaria; School of Computer Science, University of Wales, Bangor, UK

  • Venue:
  • MCS'07: Proceedings of the 7th International Conference on Multiple Classifier Systems
  • Year:
  • 2007


Abstract

Cluster ensembles are deemed to be better than single clustering algorithms for discovering complex or noisy structures in data. Various heuristics for constructing such ensembles have been examined in the literature, e.g., random feature selection, weak clusterers, random projections, etc. Typically, one heuristic is picked at a time to construct the ensemble. To increase the diversity of the ensemble, several heuristics may be applied together; however, not every combination is beneficial. Here we apply a standard genetic algorithm (GA) to select from 7 common heuristics for k-means cluster ensembles. The ensemble size is also encoded in the chromosome. In this way the data is forced to guide the selection of heuristics as well as the ensemble size. Eighteen moderate-size datasets were used: 4 artificial and 14 real. The results resonate with our previous findings in that high diversity is not necessarily a prerequisite for high accuracy of the ensemble. No particular combination of heuristics appeared to be consistently chosen across all datasets, which justifies the existing variety of cluster ensembles. Among the most often selected heuristics were random feature extraction, random feature selection and a random number of clusters assigned to each ensemble member. Based on the experiments, we recommend that the current practice of using one or two heuristics for building k-means cluster ensembles should be revised in favour of using 3-5 heuristics.
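To make the encoding concrete, the chromosome described above can be sketched as 7 on/off genes (one per heuristic) plus one integer gene for the ensemble size, evolved by an ordinary GA. This is a minimal illustrative sketch, not the authors' implementation: the heuristic labels below are placeholders (the abstract names only three of the seven), the fitness function is a toy stand-in for ensemble accuracy, and all parameter values (population size, mutation rate, size bounds) are assumptions.

```python
import random

# Placeholder labels -- the paper's exact seven heuristics are not all
# listed in the abstract, so most of these names are assumptions.
HEURISTICS = ["rand_feature_extraction", "rand_feature_selection",
              "rand_n_clusters", "weak_clusterers", "rand_projections",
              "subsampling", "rand_init"]

MIN_SIZE, MAX_SIZE = 5, 50  # assumed bounds on the ensemble size gene

def random_chromosome():
    """7 on/off genes for the heuristics plus one gene for ensemble size."""
    return ([random.randint(0, 1) for _ in HEURISTICS]
            + [random.randint(MIN_SIZE, MAX_SIZE)])

def crossover(a, b):
    """Single-point crossover of two chromosomes."""
    p = random.randrange(1, len(a))
    return a[:p] + b[p:]

def mutate(c, rate=0.1):
    """Flip heuristic bits and resample the size gene with probability `rate`."""
    c = c[:]
    for i in range(len(HEURISTICS)):
        if random.random() < rate:
            c[i] ^= 1
    if random.random() < rate:
        c[-1] = random.randint(MIN_SIZE, MAX_SIZE)
    return c

def evolve(fitness, pop_size=20, generations=30):
    """Elitist GA: keep the better half, refill with mutated crossovers."""
    pop = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(elite),
                                     random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

def toy_fitness(c):
    """Hypothetical surrogate for ensemble accuracy: in a real run this
    would build and score a k-means ensemble on the data; here it simply
    rewards 3-5 active heuristics, echoing the paper's recommendation."""
    return -abs(sum(c[:len(HEURISTICS)]) - 4)

best = evolve(toy_fitness)
```

In the paper itself the fitness would be an evaluation of the resulting k-means cluster ensemble on the dataset, so the data guides both which heuristics are switched on and how large the ensemble grows.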