Collecting commonsense experiences
Proceedings of the 2nd international conference on Knowledge capture
This paper introduces a model for producing commonsense metadata during video capture and describes how this technique can positively affect content capture, representation, and presentation. Metadata entered into the system at the moment of capture is used to generate suggestions that help the videographer decide what to shoot, how to compose a shot, and how to index the video material to best support their communication goals. The paper presents an approach, and first experiments, that use a commonsense database and reasoning techniques to support a partnership between the camera and the videographer during video capture.
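The pipeline the abstract describes (capture-time annotation in, commonsense-derived shot suggestions out) can be sketched minimally. This is an illustrative assumption, not the paper's actual system: the rule table stands in for a real commonsense knowledge base, and `suggest_shots` is a hypothetical function name.

```python
# Hypothetical sketch of commonsense-driven capture guidance.
# The rule table below is a stand-in for a commonsense knowledge
# source (e.g. relations such as "a birthday party involves cake").
COMMONSENSE_RULES = {
    "birthday party": ["blowing out candles", "cutting the cake", "opening presents"],
    "wedding": ["exchanging rings", "first dance", "tossing the bouquet"],
}


def suggest_shots(capture_metadata: str) -> list[str]:
    """Map a videographer's capture-time annotation to suggested shots.

    Any event mentioned in the annotation triggers its associated
    shot suggestions; the real system would reason over a much
    larger commonsense database rather than a lookup table.
    """
    suggestions = []
    text = capture_metadata.lower()
    for event, shots in COMMONSENSE_RULES.items():
        if event in text:
            suggestions.extend(shots)
    return suggestions
```

For example, annotating a clip "Anna's birthday party, backyard" would yield suggestions such as "blowing out candles", which the camera could surface to the videographer as upcoming shots to capture.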