Clustering web images with multi-modal features

  • Authors:
  • Manjeet Rege; Ming Dong; Jing Hua

  • Affiliations:
  • Wayne State University, Detroit, MI; Wayne State University, Detroit, MI; Wayne State University, Detroit, MI

  • Venue:
  • Proceedings of the 15th international conference on Multimedia
  • Year:
  • 2007

Abstract

Web image clustering has drawn significant attention in the research community recently. However, little work has been done on using multi-modal information to cluster Web images. In this paper, we address the problem of Web image clustering by simultaneously integrating visual and textual features from a graph partitioning perspective. In particular, we model visual features, images, and words from the text surrounding the images as a tripartite graph. This graph is viewed as a fusion of two bipartite graphs, which are partitioned simultaneously by the proposed Consistent Isoperimetric High-order Co-clustering (CIHC) framework. Although a similar approach has been adopted before, the main contribution of this work lies in the computational efficiency, clustering quality, and scalability to large image repositories that CIHC achieves. We demonstrate this through experiments on real Web images.
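The abstract does not give the algorithmic details of CIHC, but the construction it describes (a tripartite graph over visual features, images, and surrounding-text words, built from two bipartite graphs and bipartitioned with an isoperimetric-style cut) can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: the toy affinity matrices F and W, the choice of node 0 as ground, and the median threshold are all simplifying assumptions.

```python
# Sketch only: tripartite graph (features -- images -- words) built from two
# bipartite affinity matrices, then bipartitioned with an isoperimetric-style
# cut (ground one node, solve a sparse linear system, threshold the solution).
import numpy as np
from scipy.sparse import csr_matrix, bmat, diags
from scipy.sparse.linalg import spsolve


def tripartite_laplacian(F, W):
    """Laplacian and degree vector of the tripartite graph.

    F: (n_features x n_images) feature-image affinities
    W: (n_images x n_words)    image-word affinities
    """
    F, W = csr_matrix(F), csr_matrix(W)
    # Block adjacency: features connect only to images, images to words.
    A = bmat([[None, F,    None],
              [F.T,  None, W],
              [None, W.T,  None]], format="csr")
    d = np.asarray(A.sum(axis=1)).ravel()
    return (diags(d) - A).tocsr(), d


def isoperimetric_bipartition(L, d, ground=0):
    """Ground one node, solve the reduced system L0 x0 = d0, threshold at the median."""
    n = L.shape[0]
    keep = np.ones(n, dtype=bool)
    keep[ground] = False
    L0 = L[keep][:, keep].tocsc()
    x0 = spsolve(L0, d[keep])          # single sparse linear solve, no eigenproblem
    x = np.zeros(n)
    x[keep] = x0
    return x <= np.median(x)           # boolean cluster indicator over all nodes


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_feat, n_img, n_word = 20, 10, 30
    F = rng.random((n_feat, n_img))    # toy feature-image affinities
    W = rng.random((n_img, n_word))    # toy image-word affinities

    L, d = tripartite_laplacian(F, W)
    labels = isoperimetric_bipartition(L, d)
    image_labels = labels[n_feat:n_feat + n_img]   # read off the image block
    print("image cluster assignments:", image_labels.astype(int))
```

Note that isoperimetric partitioning replaces the eigenvector computation used in spectral cuts with a single sparse linear solve, which is the usual source of its efficiency advantage and is consistent with the efficiency and scalability claims made above.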