The explosion of user-generated, untagged multimedia data in recent years has created a strong need for efficient search and retrieval of this data. The predominant method for content-based tagging remains slow, labor-intensive manual annotation. Consequently, automatic tagging is currently a subject of intensive research; however, it is clear that the process will not be fully automated in the foreseeable future. We propose to involve the user and investigate methods for implicit tagging, wherein users' responses to their interaction with the multimedia content are analyzed in order to generate descriptive tags. Here, we present a multi-modal approach that analyzes both facial expressions and electroencephalography (EEG) signals to generate affective tags. We perform classification and regression in the valence-arousal space and present results for both feature-level and decision-level fusion. We demonstrate improved results when using both modalities, suggesting that the two modalities contain complementary information.
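As a rough illustration of the two fusion strategies mentioned above, the following Python sketch contrasts feature-level fusion (concatenating facial-expression and EEG feature vectors before training a single classifier) with decision-level fusion (training one classifier per modality and averaging their class probabilities). The synthetic data, feature dimensions, and logistic-regression classifiers are illustrative assumptions, not the paper's actual features or models.

```python
# Minimal sketch of feature-level vs. decision-level fusion for a
# binary valence classification task. All data here is synthetic;
# in the paper's setting the rows would be per-stimulus facial
# expression and EEG feature vectors.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 400
X_face = rng.normal(size=(n, 20))   # hypothetical facial-expression features
X_eeg = rng.normal(size=(n, 32))    # hypothetical EEG features
y = rng.integers(0, 2, size=n)      # synthetic binary valence labels

idx_train, idx_test = train_test_split(
    np.arange(n), test_size=0.25, random_state=0)

# Feature-level fusion: concatenate both modalities into one vector
# and train a single classifier on the joint representation.
X_fused = np.hstack([X_face, X_eeg])
clf = LogisticRegression(max_iter=1000).fit(X_fused[idx_train], y[idx_train])
acc_feature = accuracy_score(y[idx_test], clf.predict(X_fused[idx_test]))

# Decision-level fusion: train one classifier per modality and
# average their predicted class probabilities before deciding.
clf_face = LogisticRegression(max_iter=1000).fit(X_face[idx_train], y[idx_train])
clf_eeg = LogisticRegression(max_iter=1000).fit(X_eeg[idx_train], y[idx_train])
proba = (clf_face.predict_proba(X_face[idx_test])
         + clf_eeg.predict_proba(X_eeg[idx_test])) / 2
acc_decision = accuracy_score(y[idx_test], proba.argmax(axis=1))

print(f"feature-level fusion accuracy:  {acc_feature:.3f}")
print(f"decision-level fusion accuracy: {acc_decision:.3f}")
```

Feature-level fusion lets a single model exploit cross-modal correlations, while decision-level fusion keeps the modalities independent and combines only their outputs; which performs better depends on how complementary the modalities are, which is exactly what the comparison in the paper probes.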