Automatic dialogue systems, used for instance in call centers, should be able to determine, at a critical phase of the dialogue (indicated by the customers' vocal expression of anger or irritation), when it is better to pass the call over to a human operator. At first glance, this does not seem to be a complicated task: the literature reports that emotions can be told apart quite reliably on the basis of prosodic features. However, these results are mostly achieved in a laboratory setting, with experienced speakers (actors), and with elicited, controlled speech. We compare classification results obtained with the same feature set for elicited speech and for a Wizard-of-Oz scenario, in which users believe that they are really communicating with an automatic dialogue system. It turns out that the closer we get to a realistic scenario, the less reliable prosody is as an indicator of the speakers' emotional state. As a consequence, we propose to change the target: instead of looking for traces of particular emotions in the users' speech, we look for indicators of TROUBLE IN COMMUNICATION. To this end, we propose the module Monitoring of User State [especially of] Emotion (MOUSE), in which a prosodic classifier is combined with other knowledge sources, such as conversationally peculiar linguistic behavior, for example the use of repetitions. For this module, preliminary experimental results are reported, showing a more adequate modelling of TROUBLE IN COMMUNICATION.
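The combination of a prosodic classifier with linguistic trouble indicators described for the MOUSE module could be sketched, purely illustratively, as a weighted late fusion. The word-overlap repetition heuristic, the weights, and the decision threshold below are assumptions for the sake of the example, not the authors' actual implementation:

```python
def repetition_score(current_turn: str, previous_turn: str) -> float:
    """Fraction of words in the current user turn that repeat the previous
    turn; repetitions/rephrasings are a linguistic cue for trouble.
    (Hypothetical heuristic: simple word overlap, not the paper's method.)"""
    cur = current_turn.lower().split()
    prev = set(previous_turn.lower().split())
    if not cur:
        return 0.0
    return sum(w in prev for w in cur) / len(cur)


def trouble_in_communication(prosody_anger_prob: float,
                             current_turn: str,
                             previous_turn: str,
                             w_prosody: float = 0.6,
                             w_repetition: float = 0.4,
                             threshold: float = 0.5):
    """Late fusion of a prosodic classifier output with a repetition
    indicator; weights and threshold are illustrative assumptions."""
    score = (w_prosody * prosody_anger_prob
             + w_repetition * repetition_score(current_turn, previous_turn))
    return score >= threshold, score


# Example: the user repeats the slot value with a raised-anger prosody score,
# so the fused score crosses the (assumed) trouble threshold.
flagged, score = trouble_in_communication(0.7, "Hamburg", "to Hamburg please")
```

A fused score like this could then trigger the hand-over to a human operator; the point of the fusion is that neither cue alone (prosody in realistic speech, or repetition by itself) is reliable enough.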