Investigating the portability of corpus-derived cue phrases for dialogue act classification
COLING '08 Proceedings of the 22nd International Conference on Computational Linguistics - Volume 1
Designing and evaluating a wizarded uncertainty-adaptive spoken dialogue tutoring system
Computer Speech and Language
Domain adaptation with unlabeled data for dialog act tagging
DANLP 2010 Proceedings of the 2010 Workshop on Domain Adaptation for Natural Language Processing
NAACL HLT '12 Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Discovering habits of effective online support group chatrooms
Proceedings of the 17th ACM international conference on Supporting group work
Many dialogue system developers use data gathered from previous versions of their dialogue system to build models that enable the system to detect and respond to user affect. Prior work on domain adaptation in the dialogue systems community has shown that large differences between versions of a dialogue system degrade the performance of ported models. We therefore investigate how more minor differences, such as small changes to dialogue content and the switch from a wizarded system to a fully automated one, influence the performance of our affect detection models. In a post-hoc experiment, we train multiple models on various data sets and compare each against a test set drawn from the most recent version of our dialogue system. Analysis of the results strongly suggests that even these minor differences degrade the models' performance.
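The experimental setup described above — training separate models on data from different system versions and scoring each against a common test set from the newest version — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual method: the toy utterances, the `uncertain`/`certain` affect labels, and the simple word-count classifier are all hypothetical stand-ins for the real corpora and affect detection models.

```python
from collections import Counter, defaultdict

def train(examples):
    """Build a toy affect model: per-label word counts from (text, label) pairs."""
    counts = defaultdict(Counter)
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def predict(model, text):
    """Score each affect label by summed word counts; return the best-scoring label."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in model.items()}
    return max(scores, key=scores.get)

def accuracy(model, test):
    """Fraction of test utterances whose predicted label matches the gold label."""
    return sum(predict(model, t) == y for t, y in test) / len(test)

# Hypothetical toy corpora: an older (wizarded) system version vs. the current one.
old_version = [("um I think maybe", "uncertain"), ("the answer is force", "certain"),
               ("I guess possibly gravity", "uncertain"), ("it is acceleration", "certain")]
new_version = [("maybe it could be friction", "uncertain"), ("it is definitely velocity", "certain"),
               ("I am not sure perhaps", "uncertain"), ("the force is constant", "certain")]

# Common test set drawn from the most recent system version.
test_set = [("maybe perhaps I guess", "uncertain"), ("it is the answer", "certain")]

# Train one model per source corpus and compare all on the same test set.
for name, corpus in [("old", old_version), ("new", new_version)]:
    print(f"{name}-version model accuracy: {accuracy(train(corpus), test_set):.2f}")
```

The key design point mirrored here is that the test set is held fixed (newest system version) while only the training corpus varies, so any accuracy gap between rows is attributable to differences between system versions rather than to the evaluation data.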