A comprehensive representation scheme for video semantic ontology and its applications in semantic concept detection

  • Authors:
  • Zheng-Jun Zha; Tao Mei; Yan-Tao Zheng; Zengfu Wang; Xian-Sheng Hua

  • Affiliations:
  • School of Computing, National University of Singapore, Singapore 117417, Singapore
  • Microsoft Research Asia, Beijing 100190, PR China
  • Institute for Infocomm Research, Singapore 138632, Singapore
  • Department of Automation, University of Science and Technology of China, Hefei 230027, PR China
  • Microsoft Research Asia, Beijing 100190, PR China

  • Venue:
  • Neurocomputing
  • Year:
  • 2012

Abstract

Recent research has shown that leveraging ontologies is an effective way to facilitate semantic video concept detection. As an explicit knowledge representation, a formal ontology typically consists of a lexicon, properties, and relations. In this paper, we present a comprehensive representation scheme for video semantic ontology in which all three components are well studied. Specifically, we leverage LSCOM to construct the concept lexicon, describe concept properties as the weights of different modalities, obtained either manually or by a data-driven approach, and model two types of concept relations (i.e., pairwise correlation and hierarchical relation). In contrast to most existing ontologies, which focus on only one or two components for domain-specific videos, the proposed ontology is more comprehensive and general. To validate its effectiveness, we further apply the ontology to video concept detection. Experiments on the TRECVID 2005 corpus demonstrate superior performance compared with key existing approaches to video concept detection.
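The three ontology components described in the abstract can be sketched as a simple data structure: a concept lexicon, per-concept modality weights (the properties), and pairwise correlations plus a parent map (the relations). The sketch below is a minimal illustration, not the paper's implementation; all names, the fusion rule, and the correlation-based score refinement (blending each concept's raw detector score with correlation-weighted evidence from related concepts) are assumptions chosen to make the idea concrete.

```python
# Hypothetical sketch of the ontology components from the abstract; the
# class, field names, and refinement rule are illustrative assumptions,
# not the authors' actual method.
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class ConceptOntology:
    lexicon: List[str]                              # concept names (e.g., drawn from LSCOM)
    modality_weights: Dict[str, Dict[str, float]]   # concept -> {modality: weight} (properties)
    correlation: Dict[Tuple[str, str], float]       # pairwise concept correlations (relation 1)
    parent: Dict[str, str]                          # child -> parent concept (relation 2)

    def fuse(self, concept: str, modality_scores: Dict[str, float]) -> float:
        """Late-fuse per-modality detector scores using the concept's weights."""
        weights = self.modality_weights[concept]
        return sum(weights[m] * modality_scores[m] for m in weights)

    def refine_scores(self, scores: Dict[str, float],
                      alpha: float = 0.5) -> Dict[str, float]:
        """Blend each raw detector score with a correlation-weighted average
        of the other concepts' scores (a simple context-based refinement)."""
        refined = {}
        for c in self.lexicon:
            num, den = 0.0, 0.0
            for o in self.lexicon:
                if o == c:
                    continue
                # correlations are stored symmetrically under either key order
                w = self.correlation.get((c, o)) or self.correlation.get((o, c)) or 0.0
                num += w * scores[o]
                den += abs(w)
            context = num / den if den > 0 else scores[c]
            refined[c] = (1 - alpha) * scores[c] + alpha * context
        return refined


if __name__ == "__main__":
    onto = ConceptOntology(
        lexicon=["outdoor", "sky"],
        modality_weights={"sky": {"color": 0.7, "texture": 0.3}},
        correlation={("outdoor", "sky"): 0.8},
        parent={"sky": "outdoor"},
    )
    # a strongly correlated, high-confidence "outdoor" pulls the weak
    # "sky" score upward
    print(onto.refine_scores({"outdoor": 0.9, "sky": 0.4}))
```

With the toy numbers above, the refinement raises the weak "sky" score toward the confident, correlated "outdoor" score, which is the intuition behind exploiting pairwise concept relations for detection.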