SimClus: an effective algorithm for clustering with a lower bound on similarity

  • Authors:
  • Mohammad Al Hasan;Saeed Salem;Mohammed J. Zaki

  • Affiliations:
  • Indiana University–Purdue University, Indianapolis, IN, USA;North Dakota State University, Department of Computer Science, Fargo, ND, USA;Rensselaer Polytechnic Institute, Department of Computer Science, Troy, NY, USA

  • Venue:
  • Knowledge and Information Systems - Special Issue on Data Warehousing and Knowledge Discovery from Sensors and Streams
  • Year:
  • 2011

Abstract

Clustering algorithms generally accept a parameter k from the user, which determines the number of clusters sought. However, in many application domains, such as document categorization, social network clustering, and frequent pattern summarization, the proper value of k is difficult to guess. An alternative clustering formulation that does not require k is to impose a lower bound on the similarity between an object and its corresponding cluster representative. Such a formulation chooses exactly one representative for every cluster and minimizes the representative count. It has several additional benefits. For instance, it supports overlapping clusters in a natural way. Moreover, for every cluster it selects a representative object, which can be used effectively in summarization or semi-supervised classification tasks. In this work, we propose an algorithm, SimClus, for clustering with a lower bound on similarity. It achieves an O(log n) approximation bound on the number of clusters, whereas the bound for the best previous algorithm can be as poor as O(n). Experiments on real and synthetic data sets show that our algorithm produces more than 40% fewer representative objects, yet offers the same or better clustering quality. We also propose a dynamic variant of the algorithm, which can be used effectively in an online setting.
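
The abstract does not describe SimClus itself, but the stated O(log n) approximation bound is characteristic of a greedy set-cover-style selection of representatives. The sketch below is a minimal illustration of that general idea only, not the authors' algorithm: given a hypothetical similarity matrix `S` and a similarity threshold `t`, it repeatedly picks the object that covers the most still-uncovered objects, so that every object ends up within similarity `t` of at least one chosen representative.

```python
import numpy as np

def greedy_representatives(S, t):
    """Greedy set-cover-style representative selection (illustrative sketch).

    S : (n, n) symmetric similarity matrix (hypothetical input)
    t : lower bound on the similarity between an object and its
        cluster representative

    Returns indices of representatives such that every object has
    similarity >= t to at least one representative. Greedy set cover
    gives an O(log n) approximation on the representative count.
    """
    n = S.shape[0]
    # cover[i]: the set of objects that i could represent (including itself)
    cover = [set(np.flatnonzero(S[i] >= t)) | {i} for i in range(n)]
    uncovered = set(range(n))
    reps = []
    while uncovered:
        # choose the object that covers the most uncovered objects
        best = max(range(n), key=lambda i: len(cover[i] & uncovered))
        reps.append(best)
        uncovered -= cover[best]
    return reps

# Example usage on a small random similarity matrix
rng = np.random.default_rng(0)
X = rng.random((8, 8))
S = (X + X.T) / 2          # symmetrize
np.fill_diagonal(S, 1.0)
print(greedy_representatives(S, t=0.6))
```

Because an object may be within the threshold of several chosen representatives, this style of formulation naturally yields overlapping clusters, matching the property highlighted in the abstract.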