Scenario based dynamic video abstractions using graph matching

  • Authors:
  • JeongKyu Lee; JungHwan Oh; Sae Hwang

  • Affiliations:
  • University of Texas at Arlington, Arlington, TX (all authors)

  • Venue:
  • Proceedings of the 13th annual ACM international conference on Multimedia
  • Year:
  • 2005


Abstract

In this paper, we present scenario-based dynamic video abstraction using graph matching. Our approach has two main components: multi-level scenario generation and dynamic video abstraction. Multi-level scenarios are generated by graph-based video segmentation and a hierarchy built over the resulting segments; dynamic video abstractions are produced by accessing this hierarchy level by level.

The first step of the proposed approach is to segment a video into shots using Region Adjacency Graphs (RAGs). A RAG expresses the spatial relationships among the segmented regions of a frame. To measure the similarity between the RAGs of two consecutive frames, we propose a new similarity measure, called the Graph Similarity Measure (GSM). Next, we construct a tree structure, called a scene tree, based on the correlations between the detected shots; the correlations are also computed with the GSM, since it properly captures the relations between shots. Multi-level scenarios, which provide various levels of video abstraction, are then generated from the constructed scene tree.

We provide two types of abstraction based on multi-level scenarios: multi-level highlights and multi-length summarizations. Multi-level highlights are composed of the entire shots at each scenario level. To summarize a video at various lengths, we select key frames by considering the temporal relationships among RAGs, again computed with the GSM. We have developed a system, called the Automatic Video Analysis System (AVAS), that integrates the proposed techniques in order to demonstrate their effectiveness. The experimental results show that the proposed techniques are promising.
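The abstract does not give the GSM's exact formula, but the shot-segmentation idea it describes can be sketched as follows. This is an illustrative simplification, not the paper's method: a RAG is reduced to node and edge sets, the similarity between consecutive RAGs is approximated by a Jaccard-style overlap score (a stand-in for the GSM), and a shot boundary is declared wherever that score drops below a threshold.

```python
from dataclasses import dataclass, field

@dataclass
class RAG:
    """Region Adjacency Graph: nodes are region labels of one frame;
    an edge joins two regions that are spatially adjacent."""
    nodes: set = field(default_factory=set)
    edges: set = field(default_factory=set)  # set of frozenset({a, b}) pairs

    def add_edge(self, a, b):
        self.nodes.update((a, b))
        self.edges.add(frozenset((a, b)))

def graph_similarity(g1: RAG, g2: RAG) -> float:
    """Toy stand-in for the paper's GSM: mean of the Jaccard overlaps
    of the node sets and the edge sets, yielding a value in [0, 1]."""
    def jaccard(s1, s2):
        union = s1 | s2
        return len(s1 & s2) / len(union) if union else 1.0
    return 0.5 * (jaccard(g1.nodes, g2.nodes) + jaccard(g1.edges, g2.edges))

def detect_shot_boundaries(rags, threshold=0.5):
    """Declare a shot boundary at frame i+1 whenever the similarity
    between the RAGs of frames i and i+1 falls below the threshold."""
    return [i + 1 for i in range(len(rags) - 1)
            if graph_similarity(rags[i], rags[i + 1]) < threshold]
```

For example, two frames sharing the same regions and adjacencies score 1.0, while a frame with entirely new regions scores 0.0 against its predecessor and triggers a boundary. The paper's actual GSM additionally accounts for region attributes and temporal relations, which this sketch omits.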