Video abstraction is the task of creating a video abstract that retains only the important information in the original video stream. There are two general types of video abstracts: dynamic and static. A dynamic video abstract is a 3-dimensional representation created by temporally arranging important scenes, while a static video abstract is a 2-dimensional representation created by spatially arranging only the keyframes of important scenes. In this paper, we propose a unified method for automatically creating both types of video abstracts that takes semantic content into account, targeting broadcast sports videos in particular. For both types of abstracts, the proposed method first determines the significance of scenes. A play scene, which corresponds to a single play, is treated as the scene unit of a sports video, and the significance of every play scene is determined from the play's rank, the time at which the play occurred, and the number of replays. This information is extracted from metadata that describes the semantic content of the video, which lets us consider not only the type of each play but also its influence on the game. In addition, users' preferences are taken into account to personalize the video abstracts. For dynamic video abstracts, we propose three criteria for selecting the play scenes of highest significance: the basic criterion, the greedy criterion, and the play-cut criterion. For static video abstracts, we also propose an effective display style in which a user can easily reach target scenes from a list of keyframes by tracing the tree structure of a sports game. We experimentally verified the effectiveness of our method by comparing our results with manually created video abstracts and by conducting questionnaires.
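To make the selection step concrete, the sketch below shows a toy version of significance-based greedy scene selection for a dynamic abstract. The `PlayScene` fields, the weighting of play rank, occurrence time, and replay count, and the specific weights are all illustrative assumptions, not the paper's actual scoring function; the paper derives significance from game metadata, and the greedy criterion here only captures the general idea of filling a time budget with the highest-scoring play scenes.

```python
from dataclasses import dataclass

@dataclass
class PlayScene:
    rank: int        # importance rank of the play type (1 = highest); assumed field
    minute: float    # when the play occurred in the game; assumed field
    replays: int     # number of broadcast replays of this play; assumed field
    duration: float  # scene length in seconds; assumed field

def significance(scene: PlayScene, max_rank: int = 5) -> float:
    # Toy weighting (not the paper's formula): higher play rank, later
    # occurrence, and more replays all raise a scene's significance.
    rank_score = (max_rank - scene.rank + 1) / max_rank
    time_score = scene.minute / 90.0          # assumes a 90-minute game
    replay_score = min(scene.replays, 3) / 3.0
    return 0.5 * rank_score + 0.2 * time_score + 0.3 * replay_score

def greedy_abstract(scenes: list[PlayScene], budget: float) -> list[PlayScene]:
    # Greedy criterion sketch: take the most significant scenes that still
    # fit in the remaining time budget, then restore broadcast order.
    chosen, used = [], 0.0
    for s in sorted(scenes, key=significance, reverse=True):
        if used + s.duration <= budget:
            chosen.append(s)
            used += s.duration
    chosen.sort(key=lambda s: s.minute)
    return chosen
```

A usage example: given a late high-ranked goal (20 s), an early low-ranked pass (30 s), and a mid-game chance (25 s), a 50-second budget keeps the goal and the chance and drops the pass, and the result plays back in game order.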