Rushes video summarization based on spatio-temporal features

  • Authors:
  • Tiago O. Cunha;Flávio G. H. de Souza;Arnaldo de A. Araújo;Gisele L. Pappa

  • Affiliations:
  • Universidade Federal de Minas Gerais (UFMG), Brazil;Universidade Federal de Minas Gerais (UFMG), Brazil;Universidade Federal de Minas Gerais (UFMG), Brazil;Universidade Federal de Minas Gerais (UFMG), Brazil

  • Venue:
  • Proceedings of the 27th Annual ACM Symposium on Applied Computing
  • Year:
  • 2012

Abstract

The film-making industry, together with ordinary home users, is producing a record number of multimedia videos, generating a great demand for new methods to explore the content available in these videos. Here we focus on a method for automatic rushes video summarization. Rushes consist of unedited material generated during the recording of a film, and have characteristics not always found in standard videos: a high number of repetitions and a great number of so-called junk shots. To solve this problem, we propose an approach based on spatial and spatio-temporal features represented by bags of visual features. This representation is robust to a series of image transformations and to occlusion. The task is modeled as an optimization problem, and a strategy inspired by the multiview learning technique is applied. Results on the BBC Rushes database were compared with the three best methods submitted to TRECVID 2007, and show the methodology to be promising for dynamic rushes video summarization.
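The bag-of-visual-features representation mentioned in the abstract can be illustrated with a minimal sketch: local descriptors from all frames are clustered into a "visual vocabulary", and each frame is then described by a histogram over that vocabulary. This is not the authors' implementation; the descriptor dimensionality, vocabulary size, and the tiny k-means below are illustrative assumptions.

```python
import numpy as np

def kmeans(descriptors, k, iters=20, seed=0):
    """Tiny k-means (illustrative) to build a visual vocabulary
    from local descriptors pooled over the video frames."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest vocabulary center
        dists = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def bag_of_features(frame_descriptors, vocabulary):
    """Quantize one frame's descriptors against the vocabulary and
    return a normalized histogram: the bag-of-visual-features vector."""
    dists = np.linalg.norm(frame_descriptors[:, None] - vocabulary[None], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

# Toy data: random 8-D vectors standing in for SIFT-like local descriptors.
rng = np.random.default_rng(1)
all_descriptors = rng.normal(size=(200, 8))
vocab = kmeans(all_descriptors, k=10)
frame_vector = bag_of_features(all_descriptors[:50], vocab)
print(frame_vector.shape)  # one 10-bin histogram per frame
```

Because each frame becomes a fixed-length histogram regardless of how many descriptors it contains, frames can be compared directly (e.g. to detect the repeated takes typical of rushes), which is what makes this representation robust to image transformations and partial occlusion.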