In previous work on affective video analysis, supervised learning methods have frequently been used as classifiers. However, labeling abundant examples is time-consuming, and sometimes infeasible, because it requires human annotation, whereas unlabeled video clips are plentiful and easy to obtain. In this paper, we present a semi-supervised approach to recognizing emotions in videos. First, visual and audio features are extracted. Then, bivariate correlation is used to select sensitive features. Finally, low density separation, a semi-supervised learning algorithm, is adopted as the classifier. Comparative experiments on a database we constructed show that the semi-supervised algorithm outperforms its supervised counterpart, demonstrating the effectiveness and feasibility of our approach.
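The feature-selection step above can be sketched in code. The following is a minimal illustration (not the authors' implementation) of bivariate-correlation feature selection: each extracted audio/visual feature is scored by the absolute Pearson correlation with the emotion labels, and only features above a threshold are kept. The function name, the threshold value, and the toy data are all illustrative assumptions.

```python
import numpy as np

def select_by_correlation(X, y, threshold=0.3):
    """Keep feature columns whose |Pearson r| with the labels exceeds threshold.

    Illustrative sketch of bivariate-correlation feature selection;
    the threshold of 0.3 is an arbitrary example value.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    # Center each feature column and the label vector.
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    # Pearson r between every column of X and y, computed in one pass.
    denom = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum())
    r = (Xc * yc[:, None]).sum(axis=0) / denom
    keep = np.flatnonzero(np.abs(r) >= threshold)
    return keep, r

# Toy example: feature 0 tracks the labels, feature 1 is pure noise.
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)
X = np.column_stack([y + 0.1 * rng.standard_normal(100),
                     rng.standard_normal(100)])
keep, r = select_by_correlation(X, y, threshold=0.5)
print(keep)
```

The surviving features would then be fed to the semi-supervised classifier. Low density separation itself is not available in common Python libraries; a reader prototyping this pipeline could substitute another semi-supervised method (e.g. label spreading or self-training from scikit-learn's `sklearn.semi_supervised` module) trained on the few labeled clips plus the unlabeled pool.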