Tag Tagging: Towards More Descriptive Keywords of Image Content

  • Authors:
  • Kuiyuan Yang; Xian-Sheng Hua; Meng Wang; Hong-Jiang Zhang

  • Affiliations:
  • Dept. of Automation, University of Science & Technology of China, Hefei, China

  • Venue:
  • IEEE Transactions on Multimedia
  • Year:
  • 2011

Abstract

Tags have been demonstrated to be effective and efficient for organizing and searching social image content. However, these human-provided keywords are far from a comprehensive description of the image content, which limits their effectiveness in tag-based image search. In this paper, we propose an automatic scheme called tag tagging to supplement semantic image descriptions by associating a group of property tags with each existing tag. For example, an initial tag “tiger” may be further tagged with “white”, “stripes”, and “bottom-right” along three tag properties: color, texture, and location, respectively. In this way, the descriptive ability of the existing tags can be greatly enhanced. In the proposed scheme, a lazy learning approach is first applied to estimate the image region corresponding to each initial tag, and then a set of property tags covering six properties, namely location, color, texture, size, shape, and dominance, is derived for each initial tag. These tag properties enable much more precise image search, especially when certain properties are included in the query. Empirical evaluation shows that tag properties significantly boost the performance of social image retrieval.
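
The six properties (location, color, texture, size, shape, dominance) come from the paper, but the sketch below is only an illustration of how a few of them might be turned into property tags once a tag's image region has been estimated. The grid layout, thresholds, and function name are assumptions, not the authors' implementation.

```python
import numpy as np

def derive_property_tags(mask, grid=3):
    """Derive illustrative property tags (location, size, dominance) for one
    initial tag, given a binary mask of its estimated image region.

    mask: 2-D numpy array of 0/1 values, same size as the image.
    Returns a dict mapping property name -> property tag.
    """
    h, w = mask.shape
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return {}

    # Location: name the grid cell containing the region centroid,
    # e.g. "bottom-right" on a 3x3 grid (thresholds are assumed here).
    cy, cx = ys.mean(), xs.mean()
    rows = ["top", "middle", "bottom"]
    cols = ["left", "center", "right"]
    row = rows[min(int(cy / h * grid), grid - 1)]
    col = cols[min(int(cx / w * grid), grid - 1)]
    location = "center" if (row, col) == ("middle", "center") else f"{row}-{col}"

    # Size: fraction of the image area covered by the region, bucketed
    # into coarse labels (bucket boundaries are assumed).
    area_ratio = mask.sum() / (h * w)
    size = "large" if area_ratio > 0.5 else "medium" if area_ratio > 0.1 else "small"

    # Dominance: approximated here by whether the region covers a large
    # share of the image (a stand-in for the paper's dominance property).
    dominance = "dominant" if area_ratio > 0.3 else "non-dominant"

    return {"location": location, "size": size, "dominance": dominance}


if __name__ == "__main__":
    # Toy example: a region occupying the bottom-right corner of a 100x100 image.
    mask = np.zeros((100, 100), dtype=int)
    mask[70:100, 70:100] = 1
    print(derive_property_tags(mask))
    # -> {'location': 'bottom-right', 'size': 'small', 'dominance': 'non-dominant'}
```

Color, texture, and shape tags would follow the same pattern, mapping region statistics (dominant hue, texture descriptors, contour features) to a small vocabulary of property tags attached to the initial tag.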