ShotTagger: tag location for internet videos

  • Authors:
  • Guangda Li, Meng Wang, Yan-Tao Zheng, Haojie Li, Zheng-Jun Zha, Tat-Seng Chua

  • Affiliations:
  • NUS Graduate School for Integrative Sciences and Engineering and National University of Singapore;National University of Singapore;National University of Singapore;Dalian University of Technology;National University of Singapore;National University of Singapore

  • Venue:
  • Proceedings of the 1st ACM International Conference on Multimedia Retrieval
  • Year:
  • 2011


Abstract

Social video sharing websites allow users to annotate videos with descriptive keywords called tags, which greatly facilitate video search and browsing. However, many tags describe only part of the video content, with no temporal indication of when the tagged content actually appears, and there has been little research on automatically assigning tags to shot-level segments of a video. In this paper, we leverage users' tags as a source for analyzing the content within a video and develop a novel system, named ShotTagger, to assign tags at the shot level. Localizing tags at the shot level takes two steps. The first estimates the distribution of tags within the video based on a multiple-instance learning framework. The second exploits the semantic correlation of a tag with the other tags in the video within an optimization framework, and imposes temporal smoothness across adjacent shots to refine the shot-level tagging results. We present different applications to demonstrate the usefulness of the tag localization scheme in searching and browsing videos. A series of experiments conducted on a set of YouTube videos demonstrates the feasibility and effectiveness of our approach.
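The two-step pipeline in the abstract can be illustrated with a minimal sketch. This is not the paper's formulation: the multiple-instance estimation is stood in for by a simple cosine-similarity score between each shot's feature vector and a hypothetical tag prototype, and the temporal-smoothness constraint is approximated by iteratively averaging each shot's score with its neighbours. All names and parameters here are assumptions for illustration.

```python
import numpy as np

def shot_tag_scores(shot_features, tag_prototype):
    """Step 1 (stand-in): score each shot's relevance to a tag.

    Cosine similarity between shot features and a tag prototype,
    replacing the paper's multiple-instance learning estimate.
    """
    f = shot_features / np.linalg.norm(shot_features, axis=1, keepdims=True)
    p = tag_prototype / np.linalg.norm(tag_prototype)
    return f @ p

def temporal_smooth(scores, alpha=0.5, iters=10):
    """Step 2 (stand-in): impose smoothness across adjacent shots.

    Iteratively mixes each shot's raw score with the mean of its
    neighbours' scores, so an isolated outlier shot is damped while
    runs of consistently relevant shots are reinforced.
    """
    s = scores.astype(float).copy()
    for _ in range(iters):
        neighbour = np.empty_like(s)
        neighbour[0] = s[1]            # boundary shots have one neighbour
        neighbour[-1] = s[-2]
        neighbour[1:-1] = 0.5 * (s[:-2] + s[2:])
        s = alpha * scores + (1 - alpha) * neighbour
    return s

# Hypothetical usage: 6 shots with 4-d features; shots 2-4 match the tag.
rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 4))
proto = np.array([1.0, 0.0, 0.0, 0.0])
feats[2:5] += 5 * proto
raw = shot_tag_scores(feats, proto)
smooth = temporal_smooth(raw)
```

After smoothing, the contiguous run of matching shots stands out against the isolated noise of the other shots, which is the intuition behind refining shot-level tag assignments with a temporal constraint.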