Semi-automatic knowledge extraction, representation and context-sensitive intelligent retrieval of video content using collateral context modelling with scalable ontological networks

  • Authors:
  • Atta Badii; Chattun Lallah; Meng Zhu; Michael Crouch

  • Affiliations:
  • All authors: Intelligent Media Systems and Services Research Centre (IMSS), School of Systems Engineering, University of Reading, United Kingdom

  • Venue:
  • Image Communication
  • Year:
  • 2009

Abstract

Automatic indexing and retrieval of digital data pose major challenges. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieving such data based on its semantic content rather than on keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. For a number of years, research has been ongoing in the field of ontological engineering with the aim of using ontologies to add such (meta) knowledge to information. In this paper, we describe the architecture of a system, Dynamic REtrieval Analysis and semantic metadata Management (DREAM), designed to automatically and intelligently index large repositories of special-effects video clips based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval. The DREAM Demonstrator has been evaluated as deployed in the film post-production phase, supporting the storage, indexing and retrieval of large sets of special-effects video clips as an exemplar application domain. This paper presents the performance and usability results and highlights the scope for future enhancements of the DREAM architecture, which has proven successful in its first and possibly most challenging proving ground, namely film production, where it is already in routine use within our test-bed partners' creative processes.
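As an informal illustration of the kind of ontology-driven retrieval the abstract describes, the Python sketch below shows how query concepts can be expanded along ontological relations so that clips annotated with semantically related concepts are also retrieved. It is a minimal sketch for illustration only, not the DREAM implementation: the mini-ontology, clip annotations, scoring scheme and function names are all hypothetical assumptions.

    # Toy sketch of concept-based indexing with ontology-driven query expansion.
    # Not the DREAM system: the ontology, clip index and scoring are illustrative only.
    from collections import defaultdict

    # Hypothetical mini-ontology: concept -> related/collateral concepts.
    ONTOLOGY = {
        "explosion": {"fire", "smoke", "debris"},
        "fire": {"flame", "smoke"},
        "rain": {"water", "storm"},
        "storm": {"wind", "rain"},
    }

    # Hypothetical semantic index: clip id -> annotated concepts.
    CLIP_INDEX = {
        "clip_001": {"explosion", "debris"},
        "clip_002": {"rain", "night"},
        "clip_003": {"fire", "smoke"},
    }

    def expand(concepts, depth=1):
        """Expand query concepts by following ontology relations up to `depth` hops."""
        expanded = set(concepts)
        frontier = set(concepts)
        for _ in range(depth):
            nxt = set()
            for concept in frontier:
                nxt |= ONTOLOGY.get(concept, set())
            nxt -= expanded
            expanded |= nxt
            frontier = nxt
        return expanded

    def retrieve(query_concepts, depth=1):
        """Rank clips by the overlap between the expanded query and each clip's annotations."""
        expanded = expand(query_concepts, depth)
        scores = defaultdict(float)
        for clip, annotations in CLIP_INDEX.items():
            overlap = annotations & expanded
            if overlap:
                scores[clip] = len(overlap) / len(annotations)
        return sorted(scores.items(), key=lambda kv: -kv[1])

    if __name__ == "__main__":
        # A query for "explosion" also surfaces clip_003, which is annotated only
        # with the related concepts "fire" and "smoke".
        print(retrieve({"explosion"}))

In this toy form, the ontology supplies the collateral context: a clip never annotated with the query term can still be ranked highly if its annotations lie close to the query concept in the ontological network.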