This paper reports a user-centered evaluation of visual video summaries. We evaluated four summary types (fast-forward, user-controlled fast-forward, scene clips, and storyboard) with a set of established performance and satisfaction measures. We also conducted a repertory grid elicitation with our participants to gather evaluation constructs relating to both video summary content and controls. Results showed a lack of correlation between the performance and satisfaction measures. The user-supplied evaluation constructs spanned both the performance and satisfaction dimensions of the video summary evaluation space, and most constructs achieved moderate to good inter-rater agreement in a subsequent survey.
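The abstract does not name the agreement statistic used in the survey. As a sketch only, Cohen's kappa is one common chance-corrected measure of inter-rater agreement for two raters; the ratings and construct labels below are hypothetical and not taken from the study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e the agreement expected by chance from each rater's marginal
    label frequencies.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from the two raters' marginal label distributions.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)
    if p_e == 1.0:  # degenerate case: both raters always use the same single label
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two raters scoring ten summaries on one construct pole.
a = ["hi", "hi", "lo", "lo", "hi", "lo", "hi", "hi", "lo", "hi"]
b = ["hi", "hi", "lo", "hi", "hi", "lo", "hi", "lo", "lo", "hi"]
print(round(cohens_kappa(a, b), 3))  # prints 0.583, i.e. moderate agreement
```

Values around 0.4–0.6 are conventionally read as moderate agreement and 0.6–0.8 as substantial, which is one way the "moderate to good" characterisation could be operationalised.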