Video summarization preserving dynamic content

  • Authors:
  • Francine Chen; Matthew Cooper; John Adcock

  • Affiliations:
  • FX Palo Alto Laboratory Inc., Palo Alto, CA (all authors)

  • Venue:
  • Proceedings of the International Workshop on TRECVID Video Summarization
  • Year:
  • 2007

Abstract

This paper describes a system for selecting excerpts from unedited video and presenting them in a short summary video that lets a viewer understand the video content efficiently. Color and motion features are used to divide the video into segments within which the color distribution and camera motion are similar. Segments with and without camera motion are clustered separately to identify redundant video. Audio features are used to identify clapboard appearances, which are excluded from the summary. Representative segments from each cluster are selected for presentation. To increase the amount of original material contained in the summary and reduce the time required to view it, selected segments are played back at a higher rate based on the amount of detected camera motion in the segment. Pitch-preserving audio processing is used to better capture the sense of the original audio. Metadata about each segment is overlaid on the summary to help the viewer understand the context of the summary segments in the original video.
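
The variable-rate playback step can be illustrated with a small sketch. The Python example below is a minimal, hypothetical illustration, not the authors' implementation: the segment representation, the linear mapping from detected camera motion to playback rate, and the rate bounds are all assumptions made here for clarity.

```python
# Minimal sketch (assumptions, not the paper's implementation): derive a
# playback speed-up factor for each selected segment from the amount of
# detected camera motion, then estimate the resulting summary length.

from dataclasses import dataclass


@dataclass
class Segment:
    start: float    # start time in the source video (seconds)
    end: float      # end time in the source video (seconds)
    motion: float   # detected camera-motion magnitude, normalized to [0, 1]


def playback_rate(seg: Segment,
                  min_rate: float = 1.0,
                  max_rate: float = 2.5) -> float:
    """Assumed monotone mapping: segments with little camera motion are
    sped up the most, while high-motion segments stay closer to real time
    so they remain watchable. The direction and shape of this mapping are
    assumptions for illustration only."""
    rate = max_rate - seg.motion * (max_rate - min_rate)
    return min(max(rate, min_rate), max_rate)


def summary_duration(segments: list[Segment]) -> float:
    """Time needed to view the selected segments at their sped-up rates."""
    return sum((s.end - s.start) / playback_rate(s) for s in segments)


if __name__ == "__main__":
    selected = [Segment(10.0, 25.0, 0.2), Segment(60.0, 90.0, 0.8)]
    for s in selected:
        print(f"{s.start:6.1f}-{s.end:6.1f}s  rate x{playback_rate(s):.2f}")
    print(f"summary length: {summary_duration(selected):.1f}s")
```

Sped-up playback of this kind is typically paired with pitch-preserving audio time-scaling (as the abstract notes) so that speech in the accelerated segments remains intelligible.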