Constructing visual tag dictionary by mining community-contributed media corpus

  • Authors:
  • Meng Wang; Kuiyuan Yang

  • Affiliations:
  • Hefei University of Technology, PR China; Microsoft Research Asia, Beijing 100080, PR China

  • Venue:
  • Neurocomputing
  • Year:
  • 2012

Abstract

Visual-word-based image representation has shown effectiveness in a wide variety of applications such as categorization, annotation, and search. By detecting keypoints in images and treating their patterns as visual words, an image can be represented as a bag of visual words, analogous to the bag-of-words representation of text documents. In this paper, we construct a corpus named the visual tag dictionary by mining a community-contributed media corpus. Unlike conventional dictionaries that define terms with textual words, the visual tag dictionary interprets each tag with visual words. The dictionary is constructed fully automatically by exploring community-contributed images and their associated tags. With this dictionary, tags and images are connected via visual words, facilitating many applications. We empirically demonstrate the effectiveness of the dictionary in tag-based image search, tag ranking, image annotation, and tag graph construction.
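The pipeline the abstract describes can be sketched as follows: cluster local image descriptors into a visual-word codebook, quantize each image into a visual-word histogram, and then represent each tag by averaging the histograms of the community images that carry it. This is an illustrative sketch only, not the authors' implementation; the function names, the toy random "descriptors", and the choice of plain k-means are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=10):
    """Minimal k-means to form a visual-word codebook (illustrative only)."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest center
        dist = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dist.argmin(1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return centers

def quantize(descriptors, centers):
    """Normalized histogram of visual-word occurrences for one image."""
    dist = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    hist = np.bincount(dist.argmin(1), minlength=len(centers)).astype(float)
    return hist / hist.sum()

def build_tag_dictionary(images, image_tags, centers):
    """Interpret each tag as the mean visual-word histogram of its images."""
    per_tag = {}
    for descs, tags in zip(images, image_tags):
        h = quantize(descs, centers)
        for t in tags:
            per_tag.setdefault(t, []).append(h)
    return {t: np.mean(hs, axis=0) for t, hs in per_tag.items()}

# Toy corpus: six "images" of random 8-D descriptors with user tags.
images = [rng.normal(size=(30, 8)) for _ in range(6)]
image_tags = [["sky"], ["sky", "sea"], ["sea"],
              ["tree"], ["tree", "sky"], ["sea"]]
centers = kmeans(np.vstack(images), k=5)
dictionary = build_tag_dictionary(images, image_tags, centers)
print(sorted(dictionary))       # the tags the dictionary defines
print(dictionary["sky"])        # visual-word distribution for "sky"
```

With such a dictionary in hand, a tag and an image can be compared directly in visual-word space (e.g. by histogram similarity), which is the bridge the paper exploits for tag-based search, tag ranking, and annotation.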