Detecting user affect automatically during real-time conversation is the main challenge towards our greater aim of infusing social intelligence into a natural-language, mixed-initiative High-Fidelity (Hi-Fi) audio control spoken dialog agent. In recent years, studies on affect detection from voice have moved towards realistic, non-acted data, in which emotions are subtler. Subtler emotions, however, are harder to perceive, as tasks such as annotation and machine prediction demonstrate. This paper addresses part of this challenge by examining the role of user satisfaction ratings and of conversational/dialog features in discriminating contentment and frustration, two emotions known to be prevalent in spoken human-computer interaction. Given laboratory constraints, however, users may be positively biased when rating the system, which calls the reliability of the satisfaction data into question. We therefore conducted machine learning experiments on two datasets, one labeled by users and one by annotators, and compared them to assess the reliability of each. Our results indicate that standard classifiers were significantly more successful at discriminating the abovementioned emotions and their intensities (reflected by user satisfaction ratings) from annotator data than from user data. These results corroborate two points: first, satisfaction data can be used directly as an alternative target variable for modeling affect, and can be predicted exclusively from dialog features; second, this holds only when predicting these emotions from annotator data, suggesting that user bias does exist in laboratory-led evaluations.
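The experimental setup described above can be sketched as follows: a standard classifier trained to predict a contentment/frustration label from dialog-level features. This is a minimal illustration only, not the paper's actual pipeline; the feature set (turn count, mean ASR confidence, barge-ins, reprompts), the data, and the labeling rule are all invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200

# Invented dialog-level features (not the paper's actual feature set):
X = np.column_stack([
    rng.integers(3, 30, n).astype(float),  # number of dialog turns
    rng.uniform(0.4, 1.0, n),              # mean ASR confidence
    rng.integers(0, 5, n).astype(float),   # barge-in count
    rng.integers(0, 4, n).astype(float),   # reprompt count
])

# Invented binary target: 1 = contentment, 0 = frustration,
# tied here (artificially) to ASR confidence for demonstration.
y = (X[:, 1] > 0.7).astype(int)

# A "standard classifier" in the sense used in the abstract:
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(round(scores.mean(), 2))
```

Comparing such cross-validated scores across the two label sources (user ratings vs. annotator judgments) is the kind of comparison the study reports.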