Known-item video search via query-to-modality mapping

  • Authors:
  • Kong-Wah Wan; Yan-Tao Zheng; Lekha Chaisorn

  • Affiliations:
  • Institute for Infocomm Research, Singapore (all authors)

  • Venue:
  • MM '11 Proceedings of the 19th ACM international conference on Multimedia
  • Year:
  • 2011

Abstract

We introduce a novel query-to-modality mapping approach to the TRECVid 2010 known-item video search (KIS) task. To search for a specific target video, a KIS query is verbose, with many multi-modal attributes. Issuing all search terms to a single retrieval engine conflates the search criteria of different modalities and results in "topic drift". We propose decomposing a KIS query into a set of short uni-modal subqueries and issuing each to the search index of the corresponding modality features, such as text-based metadata and visual-based high-level features. To do so, we introduce novel syntactic query features and cast the query-to-modality mapping as a classification problem. Retrieval results on the TRECVid 2010 KIS dataset show that our approach outperforms existing methods by a significant margin.
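To make the abstract's pipeline concrete, the sketch below (not the authors' code) illustrates the general idea of decomposing a verbose KIS query into short subqueries and routing each to a modality index via a learned classifier. The subquery splitting rule, the bag-of-words features standing in for the paper's syntactic query features, the modality labels, and the toy training pairs are all illustrative assumptions.

```python
# Hypothetical sketch of query-to-modality mapping as classification.
# Feature choices, labels, and training data are assumptions, not the paper's.
import re

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

MODALITIES = ["text_metadata", "visual_hlf"]  # hypothetical modality labels


def split_into_subqueries(query: str) -> list[str]:
    """Decompose a verbose KIS query into short candidate subqueries."""
    parts = re.split(r"[.;,]\s*", query)
    return [p.strip() for p in parts if p.strip()]


# Toy (subquery, modality) training pairs; the paper uses syntactic query
# features, which we crudely approximate with word/bigram counts here.
train_subqueries = [
    ("video titled nature documentary about penguins", "text_metadata"),
    ("uploaded by a user named wildlifefan", "text_metadata"),
    ("a man in a red shirt walking on a beach", "visual_hlf"),
    ("outdoor scene with mountains in the background", "visual_hlf"),
]
texts, labels = zip(*train_subqueries)

clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(list(texts), list(labels))


def map_query_to_modalities(query: str) -> dict[str, list[str]]:
    """Assign each subquery to the modality index it should be issued to."""
    routed: dict[str, list[str]] = {m: [] for m in MODALITIES}
    for sub in split_into_subqueries(query):
        modality = clf.predict([sub])[0]
        routed[modality].append(sub)
    return routed


if __name__ == "__main__":
    kis_query = (
        "Find the clip titled cooking with grandma, "
        "an elderly woman stirring a pot in a kitchen, "
        "uploaded around 2009"
    )
    print(map_query_to_modalities(kis_query))
```

Each uni-modal subquery would then be issued only against its own index (e.g., metadata text search vs. visual high-level feature matching), which is how the decomposition is meant to avoid the topic drift described above.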