Tags are a useful tool for film search and recommendation because they contain valuable information in compressed form. However, manual tagging is time-consuming and requires the involvement of many people. We are working on automated cross-modal video classification algorithms based on signal processing. These algorithms assign subcategories to film scenes; the subcategories allow scenes to be clustered into groups and can therefore also serve as tags. In this paper, we point out the advantages of tags generated by automated video classification and present the derivation of five possible categories with their associated subcategories. The categories were determined through five group studies to ensure that they match the way humans classify film scenes. Because they are based purely on the audiovisual content and do not rely on additional external information such as the director's name or the year of production, they are also suited for automated classification. The final categories are called Dynamic, Valence, Interaction, Suspense and Essential Features, and each contains between two and nine subcategories.
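To illustrate how such a category scheme could be used as a tag vocabulary, the following is a minimal sketch in Python. The five category names come from the abstract; the subcategory labels are hypothetical placeholders, since the abstract only states that each category holds between two and nine subcategories.

```python
# Sketch of the five-category taxonomy as a tag vocabulary.
# Category names are from the paper; the subcategory labels are
# hypothetical stand-ins, as the abstract does not enumerate them.
TAXONOMY = {
    "Dynamic": ["slow", "fast"],                      # hypothetical labels
    "Valence": ["negative", "neutral", "positive"],   # hypothetical labels
    "Interaction": ["monologue", "dialogue"],         # hypothetical labels
    "Suspense": ["low", "high"],                      # hypothetical labels
    "Essential Features": ["indoor", "outdoor"],      # hypothetical labels
}

def tags_for_scene(classification: dict) -> list:
    """Flatten per-category classifier outputs into tags,
    e.g. {'Valence': 'positive'} -> ['Valence:positive']."""
    tags = []
    for category, subcategory in classification.items():
        if subcategory not in TAXONOMY.get(category, []):
            raise ValueError(f"unknown subcategory {subcategory!r} "
                             f"for category {category!r}")
        tags.append(f"{category}:{subcategory}")
    return tags
```

Each scene would then carry one tag per classified category, which is what makes the scheme directly usable for clustering scenes and for search or recommendation.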