Video Summarization Based on Semantic Representation

  • Authors:
  • Rafael Paulin Carlos; Kuniaki Uehara

  • Venue:
  • AMCP '98 Proceedings of the First International Conference on Advanced Multimedia Content Processing
  • Year:
  • 1998

Abstract

Summarization of video data is of growing practical importance: as expanding video databases inundate users with vast amounts of video data, users increasingly need reduced versions that they can assimilate with limited effort in a shorter browsing time. In recent years, many researchers have investigated summarization techniques such as fast-forward playback and skipping video frames at fixed intervals of time. However, all of these techniques are based on syntactic aspects of the video. Another idea is to present a summarized video according to its semantic representation. The critical aspect of compacting a video is context understanding, which is the key to choosing the "significant scenes" that should be included in the summarized video. The goal of this work is to show the utility of a semantic representation method for video summarization. We propose a method to extract significant scenes and create a summarized video without losing the content of the video's story. The story is analyzed by its semantic content and represented as a structured graph in which each scene is described by affect units.
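To make the abstract's idea of a scene graph built from affect units more concrete, the following is a minimal illustrative sketch, not the authors' actual method: the class names (AffectUnit, Scene), the intensity weights, and the threshold-based scoring rule are all assumptions introduced here for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AffectUnit:
    # Hypothetical affect unit: a labelled semantic/emotional event within a scene.
    label: str          # e.g. "conflict", "resolution" (assumed labels)
    intensity: float    # assumed importance weight in [0, 1]

@dataclass
class Scene:
    scene_id: int
    affect_units: List[AffectUnit] = field(default_factory=list)
    successors: List[int] = field(default_factory=list)  # edges of the story graph

    def significance(self) -> float:
        # Assumed scoring rule: sum the intensities of the scene's affect units.
        return sum(u.intensity for u in self.affect_units)

def summarize(scenes: List[Scene], threshold: float) -> List[Scene]:
    # Keep scenes whose significance exceeds the threshold, preserving story order.
    return [s for s in sorted(scenes, key=lambda s: s.scene_id)
            if s.significance() >= threshold]

# Usage: three scenes; the summary retains the two most semantically loaded ones.
scenes = [
    Scene(1, [AffectUnit("setup", 0.3)]),
    Scene(2, [AffectUnit("conflict", 0.9), AffectUnit("suspense", 0.6)]),
    Scene(3, [AffectUnit("resolution", 0.8)]),
]
summary = summarize(scenes, threshold=0.5)
print([s.scene_id for s in summary])  # -> [2, 3]
```

The design choice in this sketch is to keep scene selection independent of how affect units are produced, so any semantic analysis of the story can populate the graph before the summary is extracted.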