We developed and evaluated a multimodal affect detector that combines conversational cues, gross body language, and facial features. The detector uses feature-level fusion to combine the sensory channels and linear discriminant analyses to discriminate among naturally occurring experiences of boredom, engagement/flow, confusion, frustration, delight, and a neutral state. Training and validation data for the affect detector were collected in a study in which 28 learners completed a 32-minute tutorial session with AutoTutor, an intelligent tutoring system with conversational dialogue. Classification results supported a channel × judgment-type interaction: the face was the most diagnostic channel for spontaneous affect judgments (i.e., made at any time in the tutorial session), while conversational cues were superior for fixed judgments (i.e., made every 20 s in the session). The analyses also indicated that the accuracy of the multichannel model (face, dialogue, and posture) was statistically higher than that of the best single-channel model for fixed, but not spontaneous, affect expressions. However, multichannel models reduced the discrepancy (i.e., the variance in precision across the different emotions) of the discriminant models for both judgment types. The results also indicated that combining channels yielded superadditive effects for some affective states, but additive, redundant, or inhibitory effects for others. We explore the structure of the multimodal linear discriminant models and discuss the implications of our major findings.
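The pipeline described above can be illustrated with a minimal sketch. This is not the authors' code: the channel dimensionalities, the synthetic feature generator, and the train/test split are all assumptions made for illustration. It shows the two named ingredients — feature-level fusion (concatenating per-channel feature vectors before classification) and linear discriminant analysis over the six affective states:

```python
# Hypothetical sketch of feature-level fusion + LDA for multimodal
# affect detection (synthetic data; not the study's actual features).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
STATES = ["boredom", "flow", "confusion", "frustration", "delight", "neutral"]

def make_channel(n, dim, label_ids, sep):
    # Synthetic per-channel features: each affective state gets its own
    # mean vector; `sep` controls how separable the classes are.
    means = rng.normal(0.0, sep, size=(len(STATES), dim))
    return means[label_ids] + rng.normal(0.0, 1.0, size=(n, dim))

n = 600
y = rng.integers(0, len(STATES), size=n)
dialogue = make_channel(n, 10, y, sep=1.0)   # conversational cues
posture  = make_channel(n, 6,  y, sep=0.5)   # gross body language
face     = make_channel(n, 12, y, sep=1.5)   # facial features

# Feature-level fusion: concatenate the channels into one vector per
# observation, then train a single discriminant model on the result.
fused = np.hstack([dialogue, posture, face])

clf = LinearDiscriminantAnalysis().fit(fused[:500], y[:500])
acc = clf.score(fused[500:], y[500:])
print(f"held-out accuracy: {acc:.2f}")
```

Feature-level fusion lets the discriminant functions weight evidence across channels jointly, which is what makes superadditive (or inhibitory) channel combinations observable in the learned model.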