Web video retagging

  • Authors:
  • Zhineng Chen, Juan Cao, Tian Xia, Yicheng Song, Yongdong Zhang, Jintao Li

  • Affiliations:
  • Center for Advanced Computing Research, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China 100190 (all authors); Zhineng Chen and Yicheng Song are also with the Graduate University of the Chinese Academy of Sciences, Beijing, China

  • Venue:
  • Multimedia Tools and Applications
  • Year:
  • 2011

Abstract

Tags associated with web videos play a crucial role in organizing and accessing large-scale video collections. However, the raw tag list (RawL) is usually incomplete, imprecise and unranked, which reduces the usability of tags. Meanwhile, compared with the extensive work on improving the quality of web image tags, tags associated with web videos have received much less attention. In this paper, we propose a novel web video tag enhancement approach called video retagging, which aims at producing a more complete, precise and ranked retagged tag list (RetL) for web videos. Given a web video, video retagging first collects its textually and visually related neighbor videos. All tags attached to these neighbors are treated as potentially relevant, and RetL is then generated by inferring the relevance of each tag from both global and video-specific perspectives, using two different graph-based models. Two kinds of experiments, i.e., application-oriented video search and categorization, and user-based subjective studies, are carried out on a large-scale web video dataset. The results demonstrate that in most cases RetL is better than RawL in terms of completeness, precision and ranking.
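
To make the neighbor-based idea concrete, the sketch below illustrates in Python how tags gathered from related videos could be aggregated and ranked. It is not the authors' algorithm: the neighbor retrieval step is assumed to be done already, and the similarity-weighted voting plus a personalized random walk over a tag co-occurrence graph are simplified placeholders standing in for the paper's two graph-based relevance models.

    # A minimal, illustrative sketch of neighbor-based tag relevance
    # estimation: neighbor voting followed by a personalized random walk
    # over a tag co-occurrence graph. The similarity weights and graph
    # model are simplified assumptions, not the paper's actual models.
    from collections import defaultdict

    def retag(raw_tags, neighbors, damping=0.85, iterations=30):
        """Return a ranked tag list (RetL) for a query video.

        raw_tags  -- the raw tag list (RawL) of the query video
        neighbors -- list of (similarity, tag_list) pairs for textually
                     or visually related videos (assumed given)
        """
        # 1. Neighbor voting: every tag of a neighbor is a candidate,
        #    weighted by that neighbor's similarity to the query video.
        #    RawL tags get a small base vote so they stay in the pool.
        votes = defaultdict(float)
        for tag in raw_tags:
            votes[tag] += 0.5
        for sim, tags in neighbors:
            for tag in set(tags):
                votes[tag] += sim

        # 2. Tag co-occurrence graph over the candidates (a crude
        #    stand-in for the global / video-specific graph models).
        cooc = defaultdict(lambda: defaultdict(float))
        for sim, tags in neighbors:
            tags = list(set(tags))
            for i, a in enumerate(tags):
                for b in tags[i + 1:]:
                    cooc[a][b] += sim
                    cooc[b][a] += sim

        # 3. Personalized random walk: tags connected to other highly
        #    voted tags gain relevance.
        cand = list(votes)
        total = sum(votes.values()) or 1.0
        prior = {t: votes[t] / total for t in cand}
        score = dict(prior)
        for _ in range(iterations):
            new = {}
            for t in cand:
                spread = sum(
                    score[u] * cooc[u][t] / (sum(cooc[u].values()) or 1.0)
                    for u in cand if cooc[u][t] > 0
                )
                new[t] = (1 - damping) * prior[t] + damping * spread
            score = new

        # 4. RetL: candidate tags sorted by inferred relevance.
        return sorted(score, key=score.get, reverse=True)

    if __name__ == "__main__":
        neighbors = [
            (0.9, ["soccer", "goal", "world cup"]),
            (0.7, ["soccer", "highlights", "goal"]),
            (0.4, ["music", "soccer"]),
        ]
        print(retag(["soccer", "funny"], neighbors))

On the toy input, tags shared by many similar neighbors ("soccer", "goal") rise to the top of the ranked list, while a RawL tag that no neighbor supports ("funny") sinks, which mirrors how retagging is meant to improve completeness, precision and ranking over the raw tags.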