SHIATSU: tagging and retrieving videos without worries

  • Authors:
  • Ilaria Bartolini, Marco Patella, Corrado Romani

  • Affiliation:
  • DEIS, Alma Mater Studiorum, Università di Bologna, Bologna, Italy (all authors)

  • Venue:
  • Multimedia Tools and Applications
  • Year:
  • 2013

Abstract

The dramatic growth of video content over modern media channels (such as the Internet and mobile phone platforms) directs the interest of media broadcasters towards the topics of video retrieval and content browsing. Several video retrieval systems benefit from the use of semantic indexing based on content, since it allows an intuitive categorization of videos. However, indexing is usually performed through manual annotation, thus introducing potential problems such as ambiguity, lack of information, and non-relevance of index terms. In this paper, we present SHIATSU, a complete system for video retrieval which is based on the (semi-)automatic hierarchical semantic annotation of videos exploiting the analysis of visual content; videos can then be searched by means of attached tags and/or visual features. We experimentally evaluate the performance of SHIATSU on two different real video benchmarks, proving its accuracy and efficiency.
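
The abstract states that, once videos are annotated, they can be searched by attached tags and/or visual features. The following is a minimal, hypothetical sketch (in Python) of how such a blended tag/feature ranking could look in general; it is not SHIATSU's actual implementation, and the data structures, the Jaccard and cosine measures, and the weighting parameter alpha are illustrative assumptions only.

    # Hypothetical sketch: rank videos by a weighted mix of tag overlap and
    # visual-feature similarity. Not SHIATSU's method; names and scoring are assumed.
    from dataclasses import dataclass, field
    from math import sqrt

    @dataclass
    class Video:
        title: str
        tags: set[str]                                        # semantic annotations
        features: list[float] = field(default_factory=list)   # e.g. a visual descriptor

    def cosine(a: list[float], b: list[float]) -> float:
        # Cosine similarity between two visual feature vectors.
        dot = sum(x * y for x, y in zip(a, b))
        na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def score(video: Video, query_tags: set[str], query_features: list[float],
              alpha: float = 0.5) -> float:
        # Blend tag overlap (Jaccard) with visual similarity; alpha weights the two.
        union = video.tags | query_tags
        tag_sim = len(video.tags & query_tags) / len(union) if union else 0.0
        vis_sim = cosine(video.features, query_features)
        return alpha * tag_sim + (1 - alpha) * vis_sim

    if __name__ == "__main__":
        corpus = [
            Video("beach sunset", {"beach", "sunset", "sea"}, [0.9, 0.1, 0.2]),
            Video("city night", {"city", "night", "lights"}, [0.1, 0.8, 0.3]),
        ]
        ranked = sorted(corpus,
                        key=lambda v: score(v, {"sunset", "sea"}, [0.8, 0.2, 0.1]),
                        reverse=True)
        print([v.title for v in ranked])   # videos ordered by combined relevance

In this toy version, alpha = 1 reduces to purely tag-based search and alpha = 0 to purely feature-based search, mirroring the "tags and/or visual features" query modes mentioned in the abstract.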