The TRECVid 2008 BBC rushes summarization evaluation

  • Authors:
  • Paul Over; Alan F. Smeaton; George Awad

  • Affiliations:
  • National Institute of Standards and Technology, Gaithersburg, MD, USA; Dublin City University, Dublin, Ireland; National Institute of Standards and Technology, Gaithersburg, MD, USA

  • Venue:
  • TVS '08 Proceedings of the 2nd ACM TRECVid Video Summarization Workshop
  • Year:
  • 2008

Abstract

This paper describes an evaluation of automatic video summarization systems run on rushes from several BBC dramatic series. It was carried out under the auspices of the TREC Video Retrieval Evaluation (TRECVid) as a follow-up to the 2007 video summarization workshop held at ACM Multimedia 2007. Thirty-one research teams submitted video summaries of 40 individual rushes video files, aiming to compress out redundant and insignificant material. Each summary had a duration of at most 2% of the original. The output of a baseline system, which simply presented each full video at 50 times normal speed, was contributed by Carnegie Mellon University (CMU) as a control. The 2007 procedures for developing ground truth lists of important segments from each video were applied at the National Institute of Standards and Technology (NIST) to the BBC videos. At Dublin City University (DCU), each summary was judged by 3 humans with respect to how much of the ground truth it included and how well formed it was. Additional objective measures included how long it took the system to create the summary, how long it took the assessor to judge it against the ground truth, and what the summary's duration was. Assessor agreement on finding desired segments averaged 81%. Results indicated that while it was still difficult to exceed the baseline's performance on including ground truth, most other systems outperformed the baseline with respect to avoiding redundancy/junk and presenting the summary with a pleasant tempo/rhythm.