Evaluating tutors that listen: an overview of Project LISTEN
Smart machines in education
The Architecture of Why2-Atlas: A Coach for Qualitative Physics Essay Writing
ITS '02 Proceedings of the 6th International Conference on Intelligent Tutoring Systems
How to find trouble in communication
Speech Communication - Special issue on speech and emotion
Towards developing general models of usability with PARADISE
Natural Language Engineering
PARADISE: a framework for evaluating spoken dialogue agents
ACL '97 Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and Eighth Conference of the European Chapter of the Association for Computational Linguistics
HLT '91 Proceedings of the workshop on Speech and Natural Language
Predicting student emotions in computer-human tutoring dialogues
ACL '04 Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics
Spoken Versus Typed Human and Computer Dialogue Tutoring
International Journal of Artificial Intelligence in Education
WIRE: a wearable spoken language understanding system for the military
NAACL-HLT-Dialog '07 Proceedings of the Workshop on Bridging the Gap: Academic and Industrial Research in Dialog Technologies
Comparing Linguistic Features for Modeling Learning in Computer Tutoring
Proceedings of the 2007 conference on Artificial Intelligence in Education: Building Technology Rich Learning Contexts That Work
Exploiting discourse structure for spoken dialogue performance analysis
EMNLP '06 Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing
Designing and evaluating a wizarded uncertainty-adaptive spoken dialogue tutoring system
Computer Speech and Language
International Journal of Artificial Intelligence in Education - Special issue on Best of ITS 2010
Evaluating language understanding accuracy with respect to objective outcomes in a dialogue system
EACL '12 Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics
When Does Disengagement Correlate with Performance in Spoken Dialog Computer Tutoring?
International Journal of Artificial Intelligence in Education - Best of AIED 2011
We investigate using the PARADISE framework to develop predictive models of system performance in our spoken dialogue tutoring system. We represent performance with two metrics, user satisfaction and student learning, and train and test predictive models of each metric on our tutoring system corpora. To predict user satisfaction, we use two parameter types: 1) system-generic and 2) tutoring-specific. To predict student learning, we also use a third type: 3) user affect. Although generic parameters are useful predictors of user satisfaction in other PARADISE applications, overall our parameters produce less useful user satisfaction models in our system. However, generic and tutoring-specific parameters do produce useful models of student learning, and user affect parameters can further increase the usefulness of these models.
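A minimal sketch of the modeling approach the abstract describes: PARADISE estimates system performance as a multiple linear regression over per-dialogue parameters. The parameter names, values, and satisfaction ratings below are hypothetical, invented purely for illustration; the actual parameters and corpora are those described in the paper.

```python
# Illustrative PARADISE-style performance model (hypothetical data).
# PARADISE fits a linear regression predicting a performance metric
# (here, user satisfaction) from normalized dialogue parameters.
import numpy as np

# Hypothetical per-dialogue parameters (columns), z-normalized:
# task success, mean ASR confidence, number of system turns.
X = np.array([
    [ 1.2,  0.8, -0.5],
    [ 0.3, -0.2,  0.1],
    [-0.7,  0.5,  1.3],
    [ 0.9, -1.1, -0.2],
    [-1.4,  0.4,  0.6],
    [ 0.1, -0.6, -1.0],
])
y = np.array([4.5, 3.8, 2.9, 4.1, 2.5, 3.4])  # satisfaction ratings

# Fit regression weights (plus an intercept column) by least squares.
A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# R^2 on the data indicates how useful the fitted model is --
# the criterion the abstract uses to compare parameter types.
pred = A @ w
ss_res = float(np.sum((y - pred) ** 2))
ss_tot = float(np.sum((y - y.mean()) ** 2))
r2 = 1.0 - ss_res / ss_tot
print(f"R^2 = {r2:.3f}")
```

In the same spirit, adding tutoring-specific or user affect columns to X and comparing the resulting R^2 values is how one would judge whether those parameter types improve the model.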