Towards a unified framework for context-preserving video retrieval and summarization

  • Authors:
  • Nimit Pattanasri; Somchai Chatvichienchai; Katsumi Tanaka

  • Affiliations:
  • Department of Social Informatics, Kyoto University, Kyoto, Japan; Department of Info-Media, Siebold University of Nagasaki, Nagasaki, Japan; Department of Social Informatics, Kyoto University, Kyoto, Japan

  • Venue:
  • ICADL'05: Proceedings of the 8th International Conference on Asian Digital Libraries — Implementing Strategies and Sharing Experiences
  • Year:
  • 2005

Abstract

Watching separate video segments of interest, or a summary of them, in isolation may be neither smooth nor fully comprehensible for viewers, since contextual information between those segments is lost. A unified framework for context-preserving video retrieval and summarization is proposed to solve this problem. Given a video database and ontologies specifying relationships among the concepts used in MPEG-7 annotations, the objective is to identify, for a user query, the relevant segments together with summaries of their contextual segments. Two types of contextual segments are defined: intra-contextual segments, which make a retrieved segment semantically coherent on its own, and inter-contextual segments, which semantically link two separate segments. Relationships among verbs [3] are exploited to identify contextual segments, since such relationships provide knowledge about events and about the causes and effects of actions over time. A query model and a context-preserving video summarization method are also presented.
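The idea of inter-contextual segments can be illustrated with a minimal sketch. This is not the paper's implementation: the toy verb ontology, the single-verb-per-segment annotation, and all names below are illustrative assumptions standing in for MPEG-7 annotations and the verb relationships cited as [3].

```python
# Toy verb ontology: each verb maps to verbs it is causally related to
# (illustrative stand-in for the verb relationships the paper exploits).
VERB_RELATIONS = {
    "load": {"aim"},
    "aim": {"shoot"},
    "shoot": {"fall"},
}


def related(v1, v2):
    """True if v1 and v2 are directly linked in the toy ontology."""
    return v2 in VERB_RELATIONS.get(v1, set()) or v1 in VERB_RELATIONS.get(v2, set())


def retrieve_with_context(segments, query_verbs):
    """Return (relevant, inter_contextual) segment indices.

    `segments` is an ordered list of (segment_id, verb) pairs, one verb
    annotation per segment (a simplification). Relevant segments match
    the query directly; an inter-contextual segment lies between two
    relevant segments and carries a verb related to one of them, so
    keeping it preserves the causal thread between the matches.
    """
    relevant = [i for i, (_, v) in enumerate(segments) if v in query_verbs]
    inter = []
    for a, b in zip(relevant, relevant[1:]):
        for i in range(a + 1, b):
            _, v = segments[i]
            if related(v, segments[a][1]) or related(v, segments[b][1]):
                inter.append(i)
    return relevant, inter


segments = [("s1", "load"), ("s2", "aim"), ("s3", "talk"), ("s4", "shoot")]
print(retrieve_with_context(segments, {"load", "shoot"}))
# → ([0, 3], [1]): "aim" bridges "load" and "shoot"; "talk" is dropped.
```

Here segment s2 ("aim") is retained as inter-contextual because the ontology links it to the matched segments, while the unrelated s3 is omitted from the summary.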