Towards hierarchical context: unfolding visual community potential for interactive video retrieval

  • Authors:
  • Lin Pang; Juan Cao; Lei Bao; Yongdong Zhang; Shouxun Lin

  • Affiliations:
  • Graduate University of the Chinese Academy of Sciences, Beijing, China 100039 and Laboratory of Advanced Computing Research, Institute of Computing Technology, Chinese Academy of Sciences, Beijing ...; Laboratory of Advanced Computing Research, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China 100190; Laboratory of Advanced Computing Research, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China 100190; Laboratory of Advanced Computing Research, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China 100190; Laboratory of Advanced Computing Research, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China 100190

  • Venue:
  • Multimedia Tools and Applications
  • Year:
  • 2011

Abstract

Community structure, as an interesting property of networks, has attracted wide attention from many research fields. In this paper, we exploit the visual community structure in a visual-temporal correlation network and utilize it to improve interactive video retrieval. First, we propose a hierarchical community-based feedback algorithm. By re-ranking video shots through diffusion processes at the inter-community and intra-community levels, the feedback algorithm makes full use of the limited user feedback. Furthermore, since it avoids computation over the entire graph, the algorithm responds quickly to user feedback, which is particularly important for large video collections. Second, we propose a community-based visualization interface called VideoMap. By organizing video shots according to the community structure, VideoMap presents a comprehensive and informative view of the whole dataset to facilitate users' access. Moreover, VideoMap helps users quickly locate potentially relevant regions and make active annotations according to the distribution of labeled samples on the map. Experiments on the TRECVID 2009 search dataset demonstrate the efficiency of the feedback algorithm and the effectiveness of the visualization interface.
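To make the two-level feedback idea concrete, the sketch below shows one plausible reading of the abstract: user labels are first diffused over a small community-level affinity graph, then refined by separate diffusions inside each community, so the full shot-level graph is never processed as a whole. This is a minimal illustration, not the authors' formulation; the personalized-PageRank-style diffusion, the additive combination of the two levels, and all function and parameter names (`diffuse`, `alpha`, `iters`, the data layout of `intra_affinities`) are assumptions made for this example.

```python
import numpy as np

def diffuse(W, seeds, alpha=0.85, iters=20):
    """Score diffusion on a row-normalized affinity matrix W, seeded by `seeds`."""
    P = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)  # row-stochastic transition matrix
    f = seeds.astype(float).copy()
    for _ in range(iters):
        f = alpha * (P @ f) + (1 - alpha) * seeds  # spread scores while retaining the seed signal
    return f

def hierarchical_feedback(intra_affinities, inter_affinity, labels, communities):
    """
    intra_affinities: dict community_id -> (list of shot_ids, affinity matrix among those shots)
    inter_affinity:   affinity matrix between communities
    labels:           dict shot_id -> +1 (relevant) / -1 (irrelevant) from user feedback
    communities:      dict shot_id -> community_id
    Returns a dict shot_id -> relevance score.
    """
    n_comm = inter_affinity.shape[0]

    # Inter-community level: seed each community with its net user feedback and diffuse.
    comm_seed = np.zeros(n_comm)
    for shot, y in labels.items():
        comm_seed[communities[shot]] += y
    comm_score = diffuse(inter_affinity, comm_seed)

    # Intra-community level: diffuse feedback only within each community,
    # so no whole-graph computation is needed for a feedback round.
    shot_score = {}
    for c, (shot_ids, W) in intra_affinities.items():
        seed = np.array([labels.get(s, 0.0) for s in shot_ids])
        local = diffuse(W, seed)
        for s, v in zip(shot_ids, local):
            shot_score[s] = comm_score[c] + v  # combine community-level and shot-level evidence
    return shot_score
```

Under these assumptions, each feedback round costs only one diffusion over the community graph plus one per community, which is how a hierarchical scheme could keep response times low on large video collections.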