Exploring inter-concept relationship with context space for semantic video indexing

  • Authors:
  • Xiao-Yong Wei;Yu-Gang Jiang;Chong-Wah Ngo

  • Affiliations:
  • City University of Hong Kong, Kowloon, Hong Kong (all authors)

  • Venue:
  • Proceedings of the ACM International Conference on Image and Video Retrieval
  • Year:
  • 2009


Abstract

Semantic concept detectors are often developed individually and independently. Leveraging peripherally related concepts for joint detection, referred to as context-based concept fusion (CBCF), has been a focus of study in recent years. This paper proposes the construction of a context space and the exploration of that space for CBCF. The context space considers the global consistency of inter-concept relationships, addresses the problem of missing annotations, and is extensible to cross-domain contextual fusion. The space is linear and can be built by modeling inter-concept relationships through annotations obtained from either manual labeling or machine tagging. With a context space, CBCF reduces to a problem of concept selection and detector fusion, under which the significance of a concept/detector can be adapted when applied to a target domain different from the one where the detector was developed. Experiments on TRECVID datasets from the years 2005 to 2008 confirm the usefulness of the context space for CBCF. We observe a consistent improvement of 2.8% to 38.8% in concept detection when the context space is used, and, more importantly, a significant speed-up compared to existing approaches.
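As a rough illustration of the idea (not the authors' exact formulation), the sketch below builds a linear context space from a binary concept-annotation matrix, then fuses a target concept's detector score with the scores of its most related concepts, weighted by similarity in that space. All function names, the similarity measure (cosine), and the blending parameter `alpha` are assumptions for illustration only.

```python
import numpy as np

def build_context_space(annotations):
    """Hypothetical context space: each row of `annotations` is one
    concept's binary labels over training shots. Rows are L2-normalized
    so that the dot product of two rows is their cosine similarity,
    modeling the inter-concept relationship."""
    A = annotations.astype(float)
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    norms[norms == 0] = 1.0  # avoid division by zero for unused concepts
    return A / norms

def fuse_scores(target_idx, context, detector_scores, k=2, alpha=0.5):
    """Context-based concept fusion (illustrative): select the k concepts
    most related to the target in the context space, then blend the
    target detector's scores with a similarity-weighted average of the
    related detectors' scores."""
    sims = context @ context[target_idx]
    sims[target_idx] = -np.inf           # exclude the target itself
    top = np.argsort(sims)[::-1][:k]     # concept selection
    weights = sims[top] / sims[top].sum()
    contextual = detector_scores[top].T @ weights
    return alpha * detector_scores[target_idx] + (1 - alpha) * contextual

# Toy example: 3 concepts annotated over 4 shots, detector scores on 2
# test shots (all numbers invented for illustration).
ann = np.array([[1, 1, 0, 0],
                [1, 1, 1, 0],
                [0, 0, 1, 1]])
ctx = build_context_space(ann)
scores = np.array([[0.9, 0.1],
                   [0.7, 0.3],
                   [0.2, 0.8]])
fused = fuse_scores(0, ctx, scores)  # fuse scores for concept 0
```

In this toy case concept 1 co-occurs heavily with concept 0 while concept 2 does not, so the fused output leans on concept 1's detector; in the paper's setting the selection and weighting are instead derived from the learned context space.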