Video content analysis and named entity extraction are increasingly used to automatically generate content annotations for TV programs. A potential use of these annotations is to provide an entry point to background information that users can consume on a second screen. Such automatic enrichments are, however, of little use when it is unclear to users what they can do with them and why they would want to. We propose to contextualize the annotations through an explicit representation of discourse in the form of scene templates. Through content rules, these templates are populated with the relevant annotations. We illustrate this idea with an example video and annotations generated in the LinkedTV project.
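As a rough illustration (not the implementation described in the paper), the Python sketch below shows one way a scene template could be populated from automatic annotations via content rules. All class names, slot names, entity types, and confidence thresholds are hypothetical and chosen only to make the idea concrete.

```python
from dataclasses import dataclass, field
from typing import Callable

# One automatic annotation, e.g. a named entity or visual concept detected
# for a video fragment (all field names are illustrative, not LinkedTV's schema).
@dataclass
class Annotation:
    label: str          # e.g. "Angela Merkel"
    entity_type: str    # e.g. "Person", "Location", "Concept"
    start: float        # fragment start time in seconds
    end: float          # fragment end time in seconds
    confidence: float   # detection confidence in [0, 1]

# A slot in a scene template; a content rule decides which annotations fill it.
@dataclass
class Slot:
    name: str
    rule: Callable[[Annotation], bool]
    fillers: list = field(default_factory=list)

# A scene template: an explicit discourse structure for one scene,
# e.g. a "news item" with slots for the main actor and the location.
@dataclass
class SceneTemplate:
    scene_id: str
    slots: list

    def populate(self, annotations: list) -> None:
        # Apply each slot's content rule to select the relevant annotations.
        for slot in self.slots:
            slot.fillers = [a for a in annotations if slot.rule(a)]

# Hypothetical "news item" template with two content rules.
news_item = SceneTemplate(
    scene_id="scene-12",
    slots=[
        Slot("main_actor", lambda a: a.entity_type == "Person" and a.confidence > 0.8),
        Slot("location", lambda a: a.entity_type == "Location" and a.confidence > 0.6),
    ],
)

# Hypothetical annotations produced by named entity extraction and video analysis.
annotations = [
    Annotation("Angela Merkel", "Person", 12.0, 18.5, 0.92),
    Annotation("Berlin", "Location", 12.0, 30.0, 0.75),
    Annotation("election", "Concept", 14.0, 20.0, 0.55),
]

news_item.populate(annotations)
for slot in news_item.slots:
    print(slot.name, [a.label for a in slot.fillers])
```

Running the sketch fills the "main_actor" and "location" slots with the person and location annotations, while the low-confidence concept is left out; a second-screen presentation could then render each filled slot as background information for the current scene.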