Integrating visual and semantic contexts for topic network generation and word sense disambiguation

  • Authors:
  • Jianping Fan (UNC-Charlotte, Charlotte, NC); Hangzai Luo (East China Normal University, Shanghai, China); Yi Shen (UNC-Charlotte, Charlotte, NC); Chunlei Yang (UNC-Charlotte, Charlotte, NC)

  • Venue:
  • Proceedings of the ACM International Conference on Image and Video Retrieval
  • Year:
  • 2009

Abstract

To support more effective search in large-scale weakly-tagged image collections, we have developed a novel algorithm that integrates both the visual similarity contexts between images and the semantic similarity contexts between their tags for topic network generation and word sense disambiguation. First, a topic network is generated to characterize both the semantic and the visual similarity contexts between image topics. By organizing large numbers of image topics according to their cross-modal inter-topic similarity contexts, the topic network makes the semantics behind the tag space more explicit, so that users can rapidly gain deeper insights and formulate their queries more precisely. Second, our word sense disambiguation algorithm leverages the topic network to exploit both the visual similarity contexts between images and the semantic similarity contexts between their tags, addressing the problems of polysemy and synonymy more effectively and thereby significantly improving the precision and recall of image retrieval. Our experiments on large-scale Flickr and LabelMe image collections have produced very positive results.
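The paper itself includes no code, but the cross-modal fusion idea in the abstract can be illustrated with a minimal sketch: score each topic pair by a weighted combination of a visual similarity and a semantic similarity, then link pairs above a threshold to form the topic network. The linear weight `alpha`, the edge `threshold`, the cosine/Jaccard stand-ins for the two similarity contexts, and the use of `networkx` are all illustrative assumptions, not the authors' actual formulation.

```python
import itertools

import networkx as nx  # assumed graph library for the topic-network structure
import numpy as np


def visual_similarity(feat_i: np.ndarray, feat_j: np.ndarray) -> float:
    """Cosine similarity between mean visual feature vectors of two topics
    (an assumed stand-in for the paper's visual similarity context)."""
    denom = np.linalg.norm(feat_i) * np.linalg.norm(feat_j) + 1e-12
    return float(feat_i @ feat_j / denom)


def semantic_similarity(tags_i: set, tags_j: set) -> float:
    """Jaccard overlap between two topics' tag sets (an assumed stand-in
    for the paper's semantic similarity context between tags)."""
    return len(tags_i & tags_j) / max(len(tags_i | tags_j), 1)


def build_topic_network(topics: dict, alpha: float = 0.5,
                        threshold: float = 0.3) -> nx.Graph:
    """Link topics whose fused cross-modal similarity exceeds a threshold.

    `topics` maps a topic name to (mean visual feature, tag set); `alpha`
    trades off visual against semantic evidence. Both parameter values are
    hypothetical, chosen only to make the sketch runnable.
    """
    g = nx.Graph()
    g.add_nodes_from(topics)
    for (ti, (fi, si)), (tj, (fj, sj)) in itertools.combinations(topics.items(), 2):
        score = alpha * visual_similarity(fi, fj) + (1 - alpha) * semantic_similarity(si, sj)
        if score >= threshold:
            g.add_edge(ti, tj, weight=score)
    return g
```

In the same spirit, the word sense disambiguation step could consult a polysemous tag's neighbors in this network and keep the sense whose neighborhood best matches the query image's visual and tag context, though the paper's actual disambiguation procedure is more involved than this sketch suggests.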