Automatic summarization of rushes video using bipartite graphs

  • Authors:
  • Liang Bai;Yanli Hu;Songyang Lao;Alan F. Smeaton;Noel E. O'Connor

  • Affiliations:
  • CLARITY: Centre for Sensor Web Technologies, Dublin City University, Dublin 9, Ireland (Liang Bai, Alan F. Smeaton, Noel E. O'Connor); School of Information System & Management, National University of Defense Technology, Changsha 410073, People's Republic of China (Yanli Hu, Songyang Lao)

  • Venue:
  • Multimedia Tools and Applications
  • Year:
  • 2010

Abstract

In this paper we present a new approach for automatic summarization of rushes, i.e. raw, unstructured video. Our approach is composed of three major steps. First, based on shot and sub-shot segmentation, we filter out sub-shots with low information content that are unlikely to be useful in a summary. Second, a method using maximal matching in a bipartite graph is adapted to measure similarity between the remaining shots and to minimize inter-shot redundancy by removing the repetitive retake shots common in rushes video. Finally, the presence of faces and the motion intensity are characterised in each sub-shot, and a measure of how representative each sub-shot is in the context of the overall video is proposed. Video summaries composed of keyframe slideshows are then generated. To evaluate the effectiveness of this approach, we re-run the TRECVid evaluation using the same dataset and metrics as the TRECVid 2007 video summarization task, but with our own assessors. Results show that our approach yields a significant improvement over our own earlier work in terms of the fraction of the TRECVid summary ground truth included, and is competitive with the best of the other approaches in TRECVid 2007.
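The redundancy-removal step hinges on measuring shot similarity via maximal matching in a bipartite graph: frames (or keyframes) of one shot form one vertex set, frames of another shot form the other, and an edge connects two frames whose visual features are sufficiently similar. The sketch below illustrates this idea with Kuhn's augmenting-path algorithm; the feature representation, the similarity function, and the threshold value are assumptions for illustration, not the paper's exact choices.

```python
# Sketch: shot similarity via maximum matching in a bipartite graph.
# The similarity function and threshold below are illustrative assumptions.

def max_bipartite_matching(adj, n_left, n_right):
    """Kuhn's augmenting-path algorithm.
    adj[i] lists the right-side vertices compatible with left vertex i.
    Returns the size of a maximum matching."""
    match_right = [-1] * n_right  # right vertex -> matched left vertex, or -1

    def try_augment(u, visited):
        # Try to match left vertex u, possibly re-routing earlier matches.
        for v in adj[u]:
            if v not in visited:
                visited.add(v)
                if match_right[v] == -1 or try_augment(match_right[v], visited):
                    match_right[v] = u
                    return True
        return False

    matched = 0
    for u in range(n_left):
        if try_augment(u, set()):
            matched += 1
    return matched


def shot_similarity(frames_a, frames_b, sim, threshold=0.8):
    """Fraction of frames that can be paired one-to-one across two shots,
    where a pair (i, j) is allowed when sim(frames_a[i], frames_b[j])
    meets the threshold. A value near 1.0 suggests a repeated retake."""
    adj = [[j for j, fb in enumerate(frames_b) if sim(fa, fb) >= threshold]
           for fa in frames_a]
    m = max_bipartite_matching(adj, len(frames_a), len(frames_b))
    return m / max(len(frames_a), len(frames_b))
```

For example, with an exact-match similarity function, two shots sharing two of three frames score 2/3; in practice `sim` would compare visual features such as colour histograms, and shots scoring above a cut-off would be flagged as redundant retakes.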