Learning to adapt to unknown users: referring expression generation in spoken dialogue systems
ACL '10 Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics
We present new results from a real-user evaluation of a data-driven approach to learning user-adaptive referring expression generation (REG) policies for spoken dialogue systems. Referring expressions can be difficult to understand in technical domains where users may not know the technical 'jargon' names of the domain entities. In such cases, dialogue systems must be able to model the user's (lexical) domain knowledge and use appropriate referring expressions. We present a reinforcement learning (RL) framework in which the system learns REG policies that can adapt to unknown users online. For real users of such a system, we show that the learned policy performs significantly better than an adaptive hand-coded baseline policy, with a 20.8% average increase in adaptation accuracy, a 12.6% decrease in time taken, and a 15.1% increase in task completion rate. The learned policy also receives significantly better subjective ratings from users. This is because the learned policy adapts online to changing evidence about the user's domain expertise. We also discuss the issue of evaluation in simulation versus evaluation with real users.
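To make the idea concrete, the core loop described in the abstract can be sketched as a toy tabular RL agent. This is an illustrative sketch only, not the paper's actual system: the state features, user simulation, reward values, and the simple one-step update below are all assumptions made for this example. The agent chooses between a technical 'jargon' referring expression and a longer descriptive one, maintains a coarse belief about the user's expertise from the evidence observed online, and learns a policy per belief state.

```python
import random

# Illustrative sketch only: the real system's states, actions, user model,
# and rewards differ. Here the agent picks a referring-expression style per
# belief state and learns from a simulated user's understanding.

ACTIONS = ["jargon", "descriptive"]
STATES = ["unknown", "novice", "expert"]  # coarse belief about the user


def simulate_user(expertise, action):
    """Hypothetical user model: experts understand jargon; everyone
    understands the descriptive expression. Returns (understood, reward)."""
    understood = (expertise == "expert") if action == "jargon" else True
    # Reward understanding, but penalise the longer descriptive expression.
    reward = (10 if understood else -10) - (2 if action == "descriptive" else 0)
    return understood, reward


def train(episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        true_expertise = rng.choice(["novice", "expert"])
        state = "unknown"
        for _turn in range(3):  # a few referring expressions per dialogue
            # Epsilon-greedy action selection.
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            understood, reward = simulate_user(true_expertise, action)
            # Simple one-step (contextual-bandit style) value update.
            q[(state, action)] += alpha * (reward - q[(state, action)])
            # Update the belief state from the online evidence: whether the
            # user understood a jargon expression reveals their expertise.
            if action == "jargon":
                state = "expert" if understood else "novice"


    return q


q = train()
```

Under this toy reward, the learned policy uses descriptive expressions for users believed to be novices and jargon for recognised experts, i.e. it adapts to evidence gathered during the dialogue, which is the behaviour the abstract attributes to the learned REG policies.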