Space-Time Video Montage

  • Authors:
  • Hong-Wen Kang, Xue-Quan Chen, Yasuyuki Matsushita, Xiaoou Tang

  • Affiliations:
  • University of Science and Technology of China, Hefei, China (Kang, Chen); Microsoft Research Asia, Beijing, China (Matsushita, Tang)

  • Venue:
  • CVPR '06 Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Volume 2
  • Year:
  • 2006

Abstract

Conventional video summarization methods focus predominantly on summarizing videos along the time axis, as when building a movie trailer: the resulting trailer tends to retain much empty space in the background of the video frames while discarding much informative video content due to the size limit. In this paper we propose a novel space-time video summarization method which we call space-time video montage. The method simultaneously analyzes both the spatial and temporal information distribution in a video sequence, and extracts the visually informative space-time portions of the input videos. The informative video portions are represented as volumetric layers. The layers are then packed together in a small output video volume such that the total amount of visual information in the video volume is maximized. To achieve this packing, we develop a new algorithm based upon the first-fit and graph-cut optimization techniques. Since our method is able to cut off spatially and temporally less informative portions, it generates much more compact yet highly informative output videos. The effectiveness of our method is validated by extensive experiments over a wide variety of videos.
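The packing step described above can be illustrated with a toy sketch. The function below (a hypothetical helper, not the authors' implementation) treats each informative portion as a 3D saliency array over (time, height, width) and greedily places it into the output volume at the first offset with little conflict, in the spirit of a first-fit strategy; the paper's graph-cut seam optimization is replaced here by simple max-compositing, and the `max_overlap` and `stride` parameters are assumptions for illustration only.

```python
import numpy as np

def first_fit_pack(layers, out_shape, max_overlap=0.1, stride=2):
    """Greedily place 3D saliency layers (t, y, x) into an output volume.

    Toy stand-in for the paper's packing step: layers are tried in order
    of decreasing total saliency, and each is dropped at the first offset
    whose occupied fraction is at most ``max_overlap``. Overlapping voxels
    are merged by max-compositing instead of graph-cut seam optimization.
    """
    volume = np.zeros(out_shape)
    occupied = np.zeros(out_shape, dtype=bool)
    # Most informative layers get first pick of positions.
    for layer in sorted(layers, key=lambda l: -l.sum()):
        lt, ly, lx = layer.shape
        placed = False
        for t in range(0, out_shape[0] - lt + 1, stride):
            for y in range(0, out_shape[1] - ly + 1, stride):
                for x in range(0, out_shape[2] - lx + 1, stride):
                    region = occupied[t:t + lt, y:y + ly, x:x + lx]
                    if region.mean() <= max_overlap:  # first fit found
                        sub = volume[t:t + lt, y:y + ly, x:x + lx]
                        np.maximum(sub, layer, out=sub)
                        occupied[t:t + lt, y:y + ly, x:x + lx] |= layer > 0
                        placed = True
                        break
                if placed:
                    break
            if placed:
                break
    return volume
```

With `max_overlap=0.0` the layers tile the output volume without conflict, so the total saliency in the packed volume equals the sum over the placed layers; raising the threshold trades off compactness against information loss, which is the core tension the montage formulation optimizes.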