In this paper we present a new approach to the automatic summarization of rushes, i.e. unstructured video. Our approach is composed of three major steps. First, based on shot and sub-shot segmentations, we filter out sub-shots with low information content that are unlikely to be useful in a summary. Second, a method using maximal matching in a bipartite graph is adapted to measure similarity between the remaining shots and to minimize inter-shot redundancy by removing the repetitive retake shots common in rushes video. Finally, the presence of faces and the motion intensity are characterised in each sub-shot, and a measure of how representative the sub-shot is in the context of the overall video is proposed. Video summaries composed of keyframe slideshows are then generated. To evaluate the effectiveness of this approach we re-ran the evaluation carried out by TRECVid, using the same dataset and evaluation metrics as the TRECVid video summarization task in 2007 but with our own assessors. Results show that our approach leads to a significant improvement over our own previous work in terms of the fraction of the TRECVid summary ground truth included, and is competitive with the best of the other approaches in TRECVid 2007.
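The redundancy-removal step above scores the similarity of two shots via a maximal matching in a bipartite graph over their keyframes. The following is a minimal sketch of that idea, not the paper's implementation: keyframes are stood in for by toy feature vectors, the distance function and the `threshold` parameter are assumptions, and the matching is computed with Kuhn's augmenting-path algorithm.

```python
# Hedged sketch: inter-shot similarity as the size of a maximum matching
# in a bipartite graph whose two vertex sets are the keyframes of the two
# shots. All feature vectors and the threshold below are illustrative
# assumptions, not values from the paper.

def max_bipartite_matching(adj, n_left, n_right):
    """Maximum bipartite matching via augmenting paths (Kuhn's algorithm).
    adj[i] lists the right-side vertices adjacent to left vertex i."""
    match_right = [-1] * n_right  # match_right[v] = left vertex matched to v

    def try_augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                # v is free, or its current partner can be re-routed.
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    matched = 0
    for u in range(n_left):
        if try_augment(u, [False] * n_right):
            matched += 1
    return matched


def shot_similarity(frames_a, frames_b, threshold=0.2):
    """Fraction of keyframes that can be matched one-to-one across shots."""
    def dist(x, y):
        # Mean absolute difference between feature vectors (assumed metric).
        return sum(abs(p - q) for p, q in zip(x, y)) / len(x)

    # An edge joins a pair of keyframes whose distance is under threshold.
    adj = [[j for j, fb in enumerate(frames_b) if dist(fa, fb) < threshold]
           for fa in frames_a]
    m = max_bipartite_matching(adj, len(frames_a), len(frames_b))
    return m / max(len(frames_a), len(frames_b))


# Two retakes of the same scene should score as near-duplicates:
shot1 = [[0.10, 0.20], [0.50, 0.60]]
shot2 = [[0.12, 0.21], [0.52, 0.61]]
print(shot_similarity(shot1, shot2))  # -> 1.0
```

A shot whose similarity to an earlier shot exceeds some cutoff would be treated as a retake and dropped; the matching formulation tolerates keyframes being reordered or partially missing between takes.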