In this paper we present a new approach for automatic summarization of rushes video. Our approach is composed of three main steps. First, based on a temporal segmentation, we filter sub-shots with low information content that are unlikely to be useful in a summary. Second, a method using maximal matching in a bipartite graph is adapted to measure similarity between the remaining shots and to minimize inter-shot redundancy by removing the repetitive retake shots common in rushes content. Finally, the presence of faces and the motion intensity in each sub-shot are characterised, and a measure of how representative each sub-shot is in the context of the overall video is proposed. Video summaries composed of keyframe slideshows are then generated. To evaluate the effectiveness of this approach we re-run the evaluation carried out by TRECVID, using the same dataset and evaluation metrics as the TRECVID 2007 video summarization task but with our own assessors. Results show that our approach leads to a significant improvement in the fraction of the TRECVID summary ground truth included, and that it is competitive with the other approaches evaluated at TRECVID 2007.
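The redundancy-removal step measures inter-shot similarity via maximal matching in a bipartite graph. The following is a minimal Python sketch of that idea, not the paper's implementation: it assumes each shot is represented as a list of keyframe feature vectors (e.g. colour histograms), links keyframe pairs whose cosine similarity exceeds a threshold, and scores two shots by the size of the maximum matching; the feature representation, the cosine measure, and the threshold value are all illustrative assumptions.

```python
def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return num / den if den else 0.0

def max_bipartite_matching(adj, n_right):
    """Augmenting-path maximum matching.

    adj[i] lists the right-side nodes compatible with left node i.
    Returns the number of matched pairs.
    """
    match_right = [-1] * n_right  # right node -> matched left node

    def try_augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                # v is free, or its current partner can be re-matched elsewhere
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    size = 0
    for u in range(len(adj)):
        if try_augment(u, [False] * n_right):
            size += 1
    return size

def shot_similarity(shot_a, shot_b, threshold=0.9):
    """Fraction of keyframes matchable one-to-one across the two shots."""
    adj = [[j for j, kf_b in enumerate(shot_b) if cosine(kf_a, kf_b) >= threshold]
           for kf_a in shot_a]
    matched = max_bipartite_matching(adj, len(shot_b))
    return matched / min(len(shot_a), len(shot_b))
```

A retake-removal pass could then discard one shot of any pair whose `shot_similarity` exceeds a decision threshold, keeping only one representative of each repeated take.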