Multigraph-based query-independent learning for video search

  • Authors:
  • Yuan Liu; Tao Mei; Xiuqing Wu; Xian-Sheng Hua

  • Affiliations:
  • Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, China (Yuan Liu, Xiuqing Wu); Microsoft Research Asia, Beijing, China (Tao Mei, Xian-Sheng Hua)

  • Venue:
  • IEEE Transactions on Circuits and Systems for Video Technology
  • Year:
  • 2009

Abstract

Most existing learning-based methods for video search treat query examples as "positive" samples and build a separate model for each query. These query-dependent methods achieve only limited success, as users are usually reluctant to provide enough query examples. To address this problem, we propose a novel multigraph-based query-independent learning approach to video search, named MG-QIL, which learns the relevance information embedded in query-shot pairs. Because the learned relevance is independent of any specific query, the approach is more general and better suited to a real-world video search system. Specifically, MG-QIL constructs multiple graphs: a main graph covering all the pairs, and a set of subgraphs, each covering the pairs within the same query. Pairs in the main graph are connected in terms of relational similarity, while pairs in a subgraph are connected in terms of attributional similarity. Relevance labels are then propagated through the multiple graphs until convergence. Extensive experiments on the automatic search tasks of the TRECVID 2005-2007 benchmarks show superior performance over state-of-the-art approaches to video search. Furthermore, when applied to video search reranking, MG-QIL also achieves significant and consistent improvements over a text search baseline.
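
The abstract describes a label-propagation scheme over a main graph plus per-query subgraphs but gives no equations. Below is a minimal illustrative sketch in Python of one plausible instantiation, assuming a standard Zhou-style iterative propagation with a convex combination of the two graph terms; the function names (`multigraph_propagation`, `normalize`) and the weights `alpha` and `beta` are hypothetical and not taken from the paper.

```python
import numpy as np

def normalize(W):
    """Symmetrically normalize an affinity matrix: S = D^{-1/2} W D^{-1/2}."""
    d = W.sum(axis=1)
    d[d == 0] = 1.0  # guard against isolated nodes
    d_inv_sqrt = 1.0 / np.sqrt(d)
    return W * np.outer(d_inv_sqrt, d_inv_sqrt)

def multigraph_propagation(W_main, W_subs, y, alpha=0.5, beta=0.3,
                           n_iter=100, tol=1e-6):
    """Propagate relevance labels over a main graph plus per-query subgraphs.

    W_main : (n, n) affinity over all query-shot pairs (relational similarity)
    W_subs : list of (n, n) affinities, one per query subgraph
             (attributional similarity; zero outside that query's pairs)
    y      : (n,) initial relevance labels (1 = relevant, 0 = unknown)
    alpha, beta : propagation weights for the main graph and the subgraphs
    """
    S_main = normalize(W_main)
    S_sub = normalize(sum(W_subs))  # subgraphs combine into a block-diagonal affinity
    f = y.astype(float)
    for _ in range(n_iter):
        # Blend neighborhood evidence from both graphs with the initial labels.
        f_new = alpha * S_main @ f + beta * S_sub @ f + (1 - alpha - beta) * y
        if np.abs(f_new - f).max() < tol:
            return f_new
        f = f_new
    return f  # higher scores indicate more relevant query-shot pairs
```

Splitting the affinity into a main-graph term and a subgraph term mirrors the abstract's distinction between relational similarity (connecting pairs across all queries) and attributional similarity (connecting pairs within a single query); the converged scores balance both kinds of evidence against the initial relevance labels.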