Heterogeneous image feature integration via multi-modal spectral clustering

  • Authors:
  • Xiao Cai; Feiping Nie; Heng Huang; F. Kamangar

  • Affiliations:
  • Comput. Sci. & Eng. Dept., Univ. of Texas at Arlington, Arlington, TX, USA (all authors)

  • Venue:
  • CVPR '11 Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition
  • Year:
  • 2011

Abstract

In recent years, more and more visual descriptors have been proposed to describe objects and scenes appearing in images. Different features describe different aspects of the visual characteristics, and how to combine these heterogeneous features has become an increasingly critical problem. In this paper, we propose a novel approach to integrate such heterogeneous features in an unsupervised way by performing multi-modal spectral clustering on unlabeled and unsegmented images. Considering each type of feature as one modality, our new multi-modal spectral clustering (MMSC) algorithm learns a commonly shared graph Laplacian matrix by unifying the different modalities (image features). A non-negative relaxation is also added to our method to improve the robustness and efficiency of image clustering. We applied our MMSC method to integrate five popularly used image feature types, i.e. SIFT, HOG, GIST, LBP, and CENTRIST, and evaluated the performance on two benchmark data sets: Caltech-101 and MSRC-v1. Compared with existing unsupervised scene and object categorization methods, our approach consistently achieves superior performance as measured by three standard clustering evaluation metrics.
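
To make the pipeline described in the abstract concrete, the sketch below illustrates the general multi-view spectral clustering recipe it builds on: construct one affinity graph per feature modality, combine them into a single shared Laplacian, and cluster its leading eigenvectors. This is a minimal Python sketch under loose assumptions, not the authors' MMSC solver: it simply averages the per-modality normalized Laplacians instead of learning the shared Laplacian jointly with the non-negative relaxation, and all function and variable names are hypothetical.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans


def rbf_affinity(X, sigma=1.0):
    """Gaussian (RBF) affinity matrix for one feature modality (rows = images)."""
    D = cdist(X, X, metric="sqeuclidean")
    W = np.exp(-D / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)  # no self-loops
    return W


def normalized_laplacian(W):
    """Symmetric normalized graph Laplacian L = I - D^{-1/2} W D^{-1/2}."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    return np.eye(W.shape[0]) - (d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :])


def multimodal_spectral_clustering(feature_views, n_clusters, sigma=1.0):
    """Cluster images described by several heterogeneous feature modalities.

    NOTE: a crude stand-in for MMSC -- the shared Laplacian is just the mean of
    the per-modality Laplacians, not the jointly learned, non-negatively
    relaxed solution of the paper.
    """
    laplacians = [normalized_laplacian(rbf_affinity(X, sigma)) for X in feature_views]
    L_shared = np.mean(laplacians, axis=0)

    # Embed images with the eigenvectors of the smallest eigenvalues,
    # row-normalize, then run k-means on the spectral embedding.
    _, eigvecs = np.linalg.eigh(L_shared)
    U = eigvecs[:, :n_clusters]
    U = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(U)


# Usage (hypothetical): five per-image feature matrices, one per descriptor type
# (e.g. SIFT bag-of-words, HOG, GIST, LBP, CENTRIST), all with the same row order.
# views = [sift_bow, hog_feats, gist_feats, lbp_feats, centrist_feats]
# labels = multimodal_spectral_clustering(views, n_clusters=7)
```

Averaging the Laplacians is the simplest way to fuse the modalities; the point of MMSC is precisely to replace that fixed combination with a shared graph Laplacian learned across modalities, which is what the paper's optimization and non-negative relaxation provide.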