Dialogue act modeling in task-oriented dialogue poses significant challenges, particularly for corpora consisting of two interleaved communication streams: a dialogue stream and a task stream. In such corpora, information can be conveyed implicitly through the task stream, leaving the dialogue stream with seemingly missing information. A promising approach leverages rich resources from both the dialogue and the task streams, combining verbal and non-verbal features. This paper presents work on dialogue act modeling that leverages body posture, which may be indicative of particular dialogue acts. Three types of machine learning frameworks were compared, each combining three information sources: dialogue exchanges, task context, and users' posture. The results indicate that some models preserve the structure of task-oriented dialogue better than others, and that automatically recognized postural features may help disambiguate users' dialogue moves.
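As a rough illustration of the multi-stream feature combination described above (not the authors' actual models), the sketch below merges lexical features from the dialogue stream, a task-context label, and a posture label into a single sparse feature vector, then assigns a dialogue act with a simple nearest-neighbour rule. All feature names, state labels, and training examples here are invented for illustration.

```python
from collections import Counter

def features(utterance, task_state, posture):
    """Merge the three streams into one sparse feature dict.
    Feature names are illustrative, not taken from the paper."""
    f = Counter()
    for tok in utterance.lower().split():
        f[f"lex:{tok}"] += 1          # dialogue stream: bag of words
    f[f"task:{task_state}"] = 1       # task stream: current task state
    f[f"post:{posture}"] = 1          # non-verbal stream: posture label
    return f

def similarity(a, b):
    """Dot product of two sparse feature dicts."""
    return sum(v * b.get(k, 0) for k, v in a.items())

def classify(example, training):
    """1-nearest-neighbour over labelled (features, act) pairs."""
    return max(training, key=lambda t: similarity(example, t[0]))[1]

# Toy labelled data: (utterance, task state, posture) -> dialogue act
train = [
    (features("how do i sort this list", "editing", "leaning-forward"),
     "QUESTION"),
    (features("ok done", "testing", "upright"),
     "ACKNOWLEDGEMENT"),
]
act = classify(features("how do i fix this bug", "editing", "leaning-forward"),
               train)
```

In a real system the nearest-neighbour rule would be replaced by the learned models the paper compares, but the point of the sketch is the same: the posture and task features can break ties when the lexical evidence alone is ambiguous.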