Building a Multi-Modal Thesaurus from Annotated Images

  • Authors:
  • Hichem Frigui; Joshua Caudill

  • Affiliations:
  • CECS Dept., University of Louisville; CECS Dept., University of Louisville

  • Venue:
  • ICPR '06 Proceedings of the 18th International Conference on Pattern Recognition - Volume 04
  • Year:
  • 2006

Abstract

We propose an unsupervised approach to learning associations between low-level visual features and keywords. We assume that a collection of images is available and that each image is globally annotated. The objective is to extract representative visual profiles that correspond to frequent homogeneous regions and to associate them with keywords. These labeled profiles are then used to build a multi-modal thesaurus that could serve as a foundation for hybrid navigation and search algorithms. Our approach has two main steps. First, each image is coarsely segmented into regions, and visual features are extracted from each region. Second, the regions are categorized using a novel algorithm that performs clustering and feature weighting simultaneously. As a result, we obtain clusters of regions that share subsets of relevant features. Representatives from each cluster, along with their relevant visual and textual features, are used to build the thesaurus. The proposed approach is validated on a collection of 1169 images.
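
To make the second step concrete, here is a minimal Python sketch of clustering region descriptors while learning per-cluster feature weights. The function name `cluster_with_feature_weights`, the variance-based weight update, and all parameter choices are illustrative assumptions; this is not the paper's actual algorithm, only an illustration of the idea that each cluster of regions keeps its own feature-relevance weights.

```python
import numpy as np

def cluster_with_feature_weights(X, n_clusters, n_iter=50, seed=0):
    """Cluster region descriptors while learning per-cluster feature weights.

    X is an (n_regions, n_features) array, one row of visual descriptors
    per segmented image region. Returns cluster labels, centroids, and an
    (n_clusters, n_features) matrix of feature-relevance weights.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    centroids = X[rng.choice(n, size=n_clusters, replace=False)].copy()
    weights = np.full((n_clusters, d), 1.0 / d)  # start with uniform relevance

    for _ in range(n_iter):
        # Assign each region to the centroid with the smallest *weighted* distance.
        diff = X[:, None, :] - centroids[None, :, :]           # (n, k, d)
        dist = np.einsum("nkd,kd->nk", diff ** 2, weights)     # (n, k)
        labels = dist.argmin(axis=1)

        for k in range(n_clusters):
            members = X[labels == k]
            if members.size == 0:
                continue
            centroids[k] = members.mean(axis=0)
            # Features that vary little within the cluster are treated as
            # more relevant; normalize so each cluster's weights sum to 1.
            inv_var = 1.0 / (members.var(axis=0) + 1e-8)
            weights[k] = inv_var / inv_var.sum()

    return labels, centroids, weights

# Hypothetical usage: X stacks color/texture descriptors of all regions.
# labels, centroids, weights = cluster_with_feature_weights(X, n_clusters=20)
```

In this reading, the centroids would play the role of the representative visual profiles, and the keywords of the images whose regions fall in each cluster would supply the textual side of the thesaurus entries.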