Human behavior sensing for tag relevance assessment
Proceedings of the 21st ACM international conference on Multimedia
Implicit tagging is the technique of annotating multimedia data based on users' spontaneous nonverbal reactions. In this paper, a study is conducted to test whether users' facial expressions can be used to predict the correctness of image tags. The underlying assumption is that users tend to display certain emotions depending on whether a tag is correct. The dataset consists of users' frontal-face videos collected during an implicit tagging experiment, in which participants were shown tagged images while their facial reactions were recorded. Facial points in the video sequences are tracked by a facial point tracker, and geometric features are computed from the point positions to represent each video as a sequence of feature vectors. Hidden Markov Models (HMMs) are then used to classify these sequences as behavior typical of viewing a correctly or an incorrectly tagged image. Experimental results show that users' facial expressions can help judge the correctness of tags. The proposed method is effective for 16 out of 27 participants, with the highest prediction accuracy for a single participant being 72.1% and the highest overall accuracy being 77.98%.
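The classification scheme the abstract describes can be sketched as a pair of HMMs, one trained on reactions to correctly tagged images and one on reactions to incorrectly tagged images, with a sequence assigned to whichever model gives the higher likelihood. Below is a minimal illustration of that idea using the scaled forward algorithm over discrete observations. All parameters and the three-symbol feature quantization (0 = neutral, 1 = smile-like, 2 = frown-like) are hypothetical, chosen only for the example; the paper itself uses continuous geometric features from tracked facial points.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm.
    pi: initial state probs (N,); A: transition matrix (N, N);
    B: emission matrix (N, M) over M observation symbols."""
    alpha = pi * B[:, obs[0]]          # forward variable at t = 0
    c = alpha.sum()                     # scaling factor
    log_lik = np.log(c)
    alpha /= c
    for symbol in obs[1:]:
        alpha = (alpha @ A) * B[:, symbol]
        c = alpha.sum()
        log_lik += np.log(c)
        alpha /= c
    return log_lik

def classify(obs, hmm_correct, hmm_incorrect):
    """Label a reaction sequence by the HMM that explains it better."""
    ll_c = forward_log_likelihood(obs, *hmm_correct)
    ll_i = forward_log_likelihood(obs, *hmm_incorrect)
    return "correct" if ll_c >= ll_i else "incorrect"

# Toy parameters (hypothetical): 2 hidden states, 3 feature symbols.
pi = np.array([0.8, 0.2])
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
# The "correct tag" model favors smile-like symbols,
# the "incorrect tag" model favors frown-like symbols.
B_correct = np.array([[0.70, 0.25, 0.05],
                      [0.20, 0.70, 0.10]])
B_incorrect = np.array([[0.70, 0.05, 0.25],
                        [0.20, 0.10, 0.70]])
hmm_correct = (pi, A, B_correct)
hmm_incorrect = (pi, A, B_incorrect)

seq = [0, 0, 1, 1, 1, 0]  # a mostly smile-like reaction
print(classify(seq, hmm_correct, hmm_incorrect))  # → correct
```

In practice the two models would be trained (e.g. by Baum-Welch) on labeled reaction sequences rather than hand-set, and a continuous-emission HMM would be used directly on the geometric feature vectors.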