Constructing and Utilizing Video Ontology for Accurate and Fast Retrieval

  • Authors:
  • Kuniaki Uehara, Kimiaki Shirahama

  • Affiliations:
  • Kobe University, Japan (both authors)

  • Venue:
  • International Journal of Multimedia Data Engineering & Management
  • Year:
  • 2011

Abstract

This paper examines video retrieval based on the Query-By-Example (QBE) approach, where shots relevant to a query are retrieved from large-scale video data based on their similarity to example shots. This involves two crucial problems: first, similarity in features does not necessarily imply similarity in semantic content; second, computing the similarity of a huge number of shots to example shots is computationally expensive. The authors have developed a method that filters out a large number of shots irrelevant to a query, based on a video ontology, a knowledge base about the concepts displayed in a shot. The method utilizes various concept relationships (e.g., generalization/specialization, sibling, part-of, and co-occurrence) defined in the video ontology. In addition, although the video ontology assumes that shots are accurately annotated with concepts, accurate annotation is difficult due to the diversity of forms and appearances of the concepts. Dempster-Shafer theory is therefore used to account for the uncertainty in determining the relevance of a shot from its inaccurate annotation. Experimental results on TRECVID 2009 video data validate the effectiveness of the method.
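The abstract's use of Dempster-Shafer theory can be illustrated with a minimal sketch of Dempster's rule of combination, which fuses uncertain evidence from multiple sources into a single belief assignment. This is not the authors' implementation; the two-hypothesis frame ("relevant"/"irrelevant") and the mass values below are hypothetical stand-ins for, e.g., two unreliable concept detectors voting on a shot's relevance:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Each mass function is a dict mapping a frozenset of hypotheses
    (a subset of the frame of discernment) to a belief mass in [0, 1],
    with masses summing to 1.
    """
    combined = {}
    conflict = 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            # Agreeing evidence accumulates on the intersection.
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            # Disjoint hypotheses contribute to the conflict mass.
            conflict += w1 * w2
    if conflict >= 1.0:
        raise ValueError("Sources are in total conflict")
    # Normalize by the non-conflicting mass.
    return {h: w / (1.0 - conflict) for h, w in combined.items()}

# Hypothetical evidence from two sources about one shot.
# frozenset({"relevant", "irrelevant"}) is the "don't know" mass.
m1 = {frozenset({"relevant"}): 0.6,
      frozenset({"relevant", "irrelevant"}): 0.4}
m2 = {frozenset({"relevant"}): 0.5,
      frozenset({"irrelevant"}): 0.2,
      frozenset({"relevant", "irrelevant"}): 0.3}

fused = combine(m1, m2)
# fused[frozenset({"relevant"})] ≈ 0.773: combining the two sources
# strengthens the belief committed exactly to "relevant".
```

A shot could then be kept or filtered by thresholding the fused belief in "relevant"; the threshold and the way detector scores become masses are design choices not specified here.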