Movie affective content detection attracts ever-increasing research effort. However, affective content analysis remains a challenging task due to the gap between low-level perceptual features and high-level human perception of the media. Moreover, clues from multiple modalities should be considered for affective analysis, since movies use them jointly to represent emotions and render emotional atmosphere. In this paper, mid-level representations are generated from low-level features. These mid-level representations come from multiple modalities and are used for affective content inference. Besides video shots, which are commonly used for video content analysis, audio sounds, dialogue, and subtitles are explored to help detect affective content. Since affective analysis depends on movie genre, experiments are conducted separately within each genre. The results show that audio sounds, dialogue, and subtitles are effective and efficient for affective content detection.