Combining classifiers in multimodal affect detection
AusDM '12 Proceedings of the Tenth Australasian Data Mining Conference - Volume 134
Learners experience a variety of emotions during learning sessions with Intelligent Tutoring Systems (ITS). The research community is building systems that are aware of these affective experiences, which are generally represented either as a category or as a point in a low-dimensional space. State-of-the-art systems detect these affective states from multimodal data in naturalistic scenarios. This paper provides evidence of how the choice of representation affects the quality of the detection system. We present a user-independent model for detecting learners' affective states from video and physiological signals, using both the categorical and the dimensional representation. Machine learning techniques are used to select the best subset of features and to classify the degrees of emotion under each representation. We provide evidence that the dimensional representation, particularly valence, yields higher detection accuracy.
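The feature-selection-plus-classification pipeline described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' code: the data is synthetic, and the feature selector (`SelectKBest`) and classifier (`LogisticRegression`) are assumed stand-ins for whatever machine learning techniques the paper actually uses. The same pipeline is evaluated twice, once with categorical emotion labels and once with labels derived from the valence dimension.

```python
# Hypothetical sketch of the abstract's pipeline: feature selection followed by
# classification, compared under categorical vs dimensional (valence) labels.
# Synthetic data only; sklearn components are illustrative assumptions.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_samples, n_features = 300, 40          # stand-in for fused video + physiology features
X = rng.normal(size=(n_samples, n_features))

# Assume valence is driven by a small subset of the features.
valence = X[:, :5].sum(axis=1)
y_dimensional = (valence > 0).astype(int)        # valence: negative vs positive
y_categorical = rng.integers(0, 4, size=n_samples)  # e.g. boredom/confusion/frustration/delight

# Select the best feature subset, then classify.
pipe = make_pipeline(SelectKBest(f_classif, k=10),
                     LogisticRegression(max_iter=1000))

acc_dim = cross_val_score(pipe, X, y_dimensional, cv=5).mean()
acc_cat = cross_val_score(pipe, X, y_categorical, cv=5).mean()
print(f"valence (dimensional): {acc_dim:.2f}, categorical: {acc_cat:.2f}")
```

In this toy setup the dimensional labels are learnable from the features while the categorical labels are not, so the comparison only illustrates the evaluation protocol, not the paper's empirical result.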