What are the most eye-catching and ear-catching features in the video?: implications for video summarization

  • Authors:
  • Yaxiao Song; Gary Marchionini; Chi Young Oh

  • Affiliations:
  • University of North Carolina at Chapel Hill, Chapel Hill, NC, USA (all authors)

  • Venue:
  • Proceedings of the 19th International Conference on World Wide Web
  • Year:
  • 2010

Abstract

Video summarization is a mechanism for generating short summaries of a video that help people quickly make sense of its content before downloading it or seeking more detailed information. To produce reliable automatic video summarization algorithms, it is essential to first understand how humans create video summaries manually. This paper focuses on a corpus of instructional documentary video and seeks to improve automatic summaries by identifying which features in the video catch the eyes and ears of human assessors, then using these findings to inform automatic summarization algorithms. The paper contributes a thorough and valuable methodology for performing automatic video summarization, and the methodology can be extended to inform summarization of other video corpora.