Video search in concept subspace: a text-like paradigm

  • Authors:
  • Xirong Li, Dong Wang, Jianmin Li, Bo Zhang

  • Affiliations:
  • Tsinghua University, Beijing, China (all authors)

  • Venue:
  • Proceedings of the 6th ACM international conference on Image and video retrieval
  • Year:
  • 2007

Abstract

Although both the quantity and the quality of semantic concept detection in video are continuously improving, it remains unclear how to exploit the detected concepts as semantic indices in video search, given a specific query. In this paper, we tackle this problem and propose a video search framework that operates like text-document search. Building on well-founded text search principles, the framework first selects a few concepts related to a given query, using a tf-idf-like scheme, called c-tf-idf, to measure the informativeness of each concept with respect to the query. The selected concepts form a concept subspace, within which search can then be conducted by either a Vector Model or a Language Model. Further, two algorithms, Linear Summation and Random Walk through Concept-Link, are explored to combine the concept search results with other baseline search results in a reranking scheme. The framework is both effective and efficient. Using a lexicon of 311 concepts from the LSCOM concept ontology, experiments on the TRECVID 2006 search data set show that, when used alone, search within the concept subspace achieves the state-of-the-art concept search result; when used to rerank the baseline results, it improves over the top 20 automatic search runs in TRECVID 2006 by approx. 20% on average, and by approx. 50% on the most significant run, all within 180 milliseconds on a normal PC.
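
To make the pipeline concrete, here is a minimal Python sketch of the two text-like steps the abstract describes: selecting a concept subspace with a tf-idf-style informativeness score, and fusing concept scores with a baseline by linear summation. The paper's exact c-tf-idf formula, concept descriptions, and fusion weights are not reproduced in the abstract, so the weighting below is a standard tf-idf analogue and every identifier (concept token lists, alpha, k) is an illustrative assumption, not the authors' implementation.

```python
import math

def c_tf_idf(query_terms, concept_descriptions):
    """Score each concept's informativeness for a query with a tf-idf-like
    weighting over tokenized concept descriptions (illustrative analogue of
    the paper's c-tf-idf; the published formula may differ)."""
    n = len(concept_descriptions)
    # Document frequency: in how many concept descriptions each term occurs.
    df = {t: sum(1 for d in concept_descriptions.values() if t in d)
          for t in query_terms}
    scores = {}
    for concept, desc in concept_descriptions.items():
        score = 0.0
        for t in query_terms:
            tf = desc.count(t)                     # term frequency in this concept
            if tf and df[t]:
                score += tf * math.log(n / df[t])  # tf * idf
        scores[concept] = score
    return scores

def select_concept_subspace(scores, k=5):
    """Keep the k most informative concepts; they form the concept subspace."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

def linear_summation(concept_score, baseline_score, alpha=0.5):
    """Rerank a shot by a convex combination of its concept-based score
    and its baseline (e.g., text) search score."""
    return alpha * concept_score + (1 - alpha) * baseline_score

# Toy usage with hypothetical concepts and query:
concepts = {
    "sports":  ["ball", "game", "stadium", "player"],
    "vehicle": ["car", "truck", "road"],
    "meeting": ["table", "room", "people"],
}
scores = c_tf_idf(["ball", "stadium"], concepts)
print(select_concept_subspace(scores, k=1))  # -> ['sports']
```

Restricting scoring to the top-k concepts is what keeps the approach fast: only a handful of the 311 detector outputs need to be consulted per query, which is consistent with the sub-200 ms latency the abstract reports.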