People in dialog use a rich set of nonverbal behaviors, including variations in the prosody of their utterances. Such behaviors, often emotion-related, call for appropriate responses, but today's spoken dialog systems lack the ability to produce them. Recent work has shown how to recognize user emotions from prosody and how to express system-side emotions with prosody, but demonstrations of how to combine these functions to improve the user experience have been lacking. Working with a corpus of conversations with students about graduate school, we analyzed the emotional states of the interlocutors, utterance by utterance, along three dimensions: activation, evaluation, and power. We found that the emotional coloring of a speaker's utterance could be largely predicted from the emotion shown by her interlocutor in the immediately previous utterance. This finding enabled us to build Gracie, the first spoken dialog system that recognizes a user's emotional state from his or her speech and gives a response with appropriate emotional coloring. An evaluation with 36 subjects showed that they felt significantly more rapport with Gracie than with either of two controls. This shows that dialog systems can tap into this important level of interpersonal interaction using today's technology.
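The core predictive idea described above — estimating the emotional coloring of a response from the interlocutor's immediately previous utterance, one value per dimension (activation, evaluation, power) — can be sketched as a simple per-dimension linear model. This is a hypothetical illustration with toy data, not the corpus or model actually used in the work; all names here (`fit_linear`, `train`, `predict`) are invented for the example.

```python
# Hypothetical sketch: predict the emotional coloring of a response
# from the previous utterance's emotion, with one least-squares
# linear model per dimension. Toy data only; the actual study's
# corpus and modeling details are not reproduced here.

DIMENSIONS = ("activation", "evaluation", "power")

def fit_linear(xs, ys):
    """Closed-form least-squares fit of y ~ a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

def train(pairs):
    """pairs: list of (prev_emotion, response_emotion) dicts,
    each mapping dimension name -> value in [-1, 1] or [0, 1]."""
    models = {}
    for dim in DIMENSIONS:
        xs = [prev[dim] for prev, _ in pairs]
        ys = [resp[dim] for _, resp in pairs]
        models[dim] = fit_linear(xs, ys)
    return models

def predict(models, prev_emotion):
    """Predicted coloring for the next utterance, per dimension."""
    return {dim: models[dim][0] * prev_emotion[dim] + models[dim][1]
            for dim in DIMENSIONS}

# Toy corpus: each response mirrors the previous utterance,
# shifted slightly upward on every dimension.
pairs = [
    ({"activation": 0.2, "evaluation": 0.1, "power": 0.0},
     {"activation": 0.3, "evaluation": 0.2, "power": 0.1}),
    ({"activation": 0.8, "evaluation": 0.5, "power": 0.4},
     {"activation": 0.9, "evaluation": 0.6, "power": 0.5}),
]

models = train(pairs)
prediction = predict(models, {"activation": 0.5,
                              "evaluation": 0.3,
                              "power": 0.2})
```

A dialog system built on this idea would annotate each incoming user utterance with the three dimension values (e.g. from prosodic features) and use the predicted coloring to select or shape its spoken response.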