Rushes summarization using different redundancy elimination approaches

  • Authors (with affiliations):
  • Narongsak Putpuek (National Institute of Informatics, Tokyo, Japan and Chulalongkorn University, Bangkok, Thailand); Duy-Dinh Le (National Institute of Informatics, Tokyo, Japan); Nagul Cooharojananone (National Institute of Informatics, Tokyo, Japan and Chulalongkorn University, Bangkok, Thailand); Shin'ichi Satoh (National Institute of Informatics, Tokyo, Japan); Chidchanok Lursinsap (Chulalongkorn University, Bangkok, Thailand)

  • Venue:
  • TVS '08 Proceedings of the 2nd ACM TRECVid Video Summarization Workshop
  • Year:
  • 2008


Abstract

Generating short summary videos for rushes is a challenging task due to the difficulty of eliminating redundancy and of determining the important objects and events to include in the summary. Redundancy elimination is difficult because repetitive segments, which are takes of the same scene, usually differ in length and motion pattern. This causes clustering approaches that represent each shot by a single keyframe to fail. In addition, even if the repetitive segments can be determined precisely, the summary generated by concatenating the selected segments may still exceed the upper duration limit. Selecting a sub-segment that conveys as much information about a given scene as possible might be a good way to improve this process. We introduce two approaches to solve these problems. In the first approach, one keyframe is used to represent each shot for clustering, and sub-segments are selected using motion information to generate the summary. In the second approach, all the frames of a given shot are used for clustering, and a simple skimming method selects the sub-segments. Experimental results on the TRECVID 2008 dataset and a comparison between the two approaches are also reported.
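The keyframe-based redundancy elimination described in the first approach can be sketched as follows. This is a minimal illustration, not the authors' actual implementation: it assumes each shot is already reduced to a single keyframe feature vector (e.g., a color histogram) and uses a simple greedy distance-threshold clustering; the function names and the threshold value are hypothetical.

```python
def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def cluster_keyframes(histograms, threshold=0.3):
    """Greedy clustering of shots by their keyframe histograms.

    A shot joins the first cluster whose representative keyframe is
    within `threshold`; otherwise it starts a new cluster. Shots that
    land in the same cluster are treated as retakes of one scene, so
    only one sub-segment per cluster would enter the final summary.
    """
    clusters = []  # each cluster: list of shot indices
    reps = []      # representative histogram of each cluster
    for i, h in enumerate(histograms):
        for members, rep in zip(clusters, reps):
            if dist(h, rep) < threshold:
                members.append(i)
                break
        else:
            clusters.append([i])
            reps.append(h)
    return clusters

# Toy example: shots 0 and 2 are near-identical retakes of one scene.
hists = [[0.9, 0.1], [0.1, 0.9], [0.88, 0.12]]
print(cluster_keyframes(hists))  # -> [[0, 2], [1]]
```

As the abstract notes, this single-keyframe representation is exactly what breaks down when retakes differ in length and motion, which motivates the paper's second, all-frames clustering approach.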