Learning structured concept-segments for interactive video retrieval

  • Authors:
  • Zhikun Wang; Dong Wang; Jianmin Li; Bo Zhang

  • Affiliations:
  • Tsinghua University, Beijing, China (all authors)

  • Venue:
  • CIVR '08: Proceedings of the 2008 International Conference on Content-Based Image and Video Retrieval
  • Year:
  • 2008

Abstract

With a large lexicon of over 300 semantic concepts now available for indexing purposes, video retrieval can be made easier by leveraging the available semantic indices. However, any successful concept-based video retrieval approach must take the following into account: although they are improving continuously, concept indexing results are still far from perfect, and many concepts remain undetected due to the limited amount of annotated data. In addition, a structured query formulation, rather than a simple logical AND over chosen concepts, is more desirable for modeling complex query needs with a fixed concept lexicon. In this paper, we propose a concept-based interactive video retrieval approach to tackle these problems. To better represent the query information need, the proposed approach learns from feedback information a structured formulation consisting of multiple semantic concept combination terms. Instead of taking the top-ranked items from the selected concepts, it relies on a simple mining algorithm to drill down to concept-segments where positive examples are more densely populated than negative examples. We evaluate the proposed method on the large-scale TRECVid 2005 and 2006 data sets and achieve promising results. Retrieval at the concept-segment level improves on concept-level retrieval by 14%, and the structured query formulation improves on the simple logical AND formulation by around 13%. The learning and retrieval process takes only 300 ms, satisfying the real-time requirement of interactive search.
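
The abstract does not spell out the mining step, so the following Python sketch only illustrates the general idea it describes: drilling down to concept-segments by scoring each segment with the relative density of positive versus negative feedback items. The names (feedback, segment_index) and the smoothed density score are hypothetical and not taken from the paper.

    def score_concept_segments(feedback, segment_index, smoothing=1.0):
        """Rank concept-segments by how densely positive feedback items
        fall into them relative to negative ones (illustrative sketch).

        feedback      : dict mapping shot_id -> +1 (relevant) or -1 (non-relevant)
        segment_index : dict mapping segment_id -> set of shot_ids in that segment
        smoothing     : additive smoothing so sparsely judged segments are not extreme
        """
        scores = {}
        for seg_id, shots in segment_index.items():
            pos = sum(1 for s in shots if feedback.get(s) == +1)
            neg = sum(1 for s in shots if feedback.get(s) == -1)
            # Segments dominated by positive judgments score highest.
            scores[seg_id] = (pos + smoothing) / (pos + neg + 2 * smoothing)
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    # Toy usage: two hypothetical segments of a "person + outdoor" concept term.
    feedback = {"shot1": +1, "shot2": +1, "shot3": -1, "shot4": -1}
    segment_index = {
        "person_outdoor/segA": {"shot1", "shot2"},
        "person_outdoor/segB": {"shot3", "shot4"},
    }
    print(score_concept_segments(feedback, segment_index))

A single scoring pass over a few hundred such segments is trivially cheap, which is consistent with the sub-second interactive feedback loop the abstract reports; the additive smoothing simply keeps segments with only one or two judged shots from dominating the ranking.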