The use of explicit user models in text generation: tailoring to a user's level of expertise
Generating descriptions that exploit a user's domain knowledge
Current research in natural language generation
Introduction to Reinforcement Learning
Separating Skills from Preference: Using Learning to Program by Reward
ICML '02 Proceedings of the Nineteenth International Conference on Machine Learning
Tailoring lexical choice to the user's vocabulary in multimedia explanation generation
ACL '93 Proceedings of the 31st annual meeting on Association for Computational Linguistics
Cooking up referring expressions
ACL '89 Proceedings of the 27th annual meeting on Association for Computational Linguistics
A Bayesian approach for user modeling in dialogue systems
COLING '94 Proceedings of the 15th conference on Computational linguistics - Volume 2
The Knowledge Engineering Review
Natural language generation as planning under uncertainty for spoken dialogue systems
EACL '09 Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics
Learning lexical alignment policies for generating referring expressions in spoken dialogue systems
ENLG '09 Proceedings of the 12th European Workshop on Natural Language Generation
Agenda-based user simulation for bootstrapping a POMDP dialogue system
NAACL-Short '07 Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers
SIGDIAL '09 Proceedings of the SIGDIAL 2009 Conference: The 10th Annual Meeting of the Special Interest Group on Discourse and Dialogue
A multiagent approach to obtain open and flexible user models in adaptive learning communities
UM'03 Proceedings of the 9th international conference on User modeling
Optimising information presentation for spoken dialogue systems
ACL '10 Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics
INLG '10 Proceedings of the 6th International Natural Language Generation Conference
Adaptive referring expression generation in spoken dialogue systems: evaluation with real users
SIGDIAL '10 Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue
Spatially-aware dialogue control using hierarchical reinforcement learning
ACM Transactions on Speech and Language Processing (TSLP)
HLT '11 Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers - Volume 2
Optimising natural language generation decision making for situated dialogue
SIGDIAL '11 Proceedings of the SIGDIAL 2011 Conference
Talkin' bout a revolution (statistically speaking)
ENLG '11 Proceedings of the 13th European Workshop on Natural Language Generation
Optimising incremental dialogue decisions using information density for interactive systems
EMNLP-CoNLL '12 Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning
Optimising incremental generation for spoken dialogue systems: reducing the need for fillers
INLG '12 Proceedings of the Seventh International Natural Language Generation Conference
The listening talker: A review of human and algorithmic context-induced modifications of speech
Computer Speech and Language
We present a data-driven approach to learning user-adaptive referring expression generation (REG) policies for spoken dialogue systems. Referring expressions can be difficult to understand in technical domains, where users may not know the technical 'jargon' names of the domain entities. In such cases, a dialogue system must model the user's (lexical) domain knowledge and choose appropriate referring expressions. We present a reinforcement learning (RL) framework in which the system learns REG policies that can adapt to unknown users online. Furthermore, unlike supervised learning methods, which require a large corpus of expert adaptive behaviour to train on, we show that effective adaptive policies can be learned from a small dialogue corpus of non-adaptive human-machine interaction, using an RL framework and a statistical user simulation. Compared with adaptive hand-coded baseline policies, the learned policy performs significantly better, with an 18.6% average increase in adaptation accuracy. The best learned policy also takes less dialogue time (on average 1.07 minutes less) than the best hand-coded policy, because the learned policies can adapt online to changing evidence about the user's domain expertise.
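To make the idea concrete, here is a minimal sketch (not the paper's actual system) of the general technique the abstract describes: tabular Q-learning of a referring-expression choice ("jargon" vs. "descriptive") trained against a hand-built simulated user whose lexical expertise is hidden from the agent. The state set, reward values, and simulated-user behaviour below are all illustrative assumptions.

```python
# Illustrative sketch only; states, rewards, and the user model are assumptions.
import random

random.seed(0)
ACTIONS = ["jargon", "descriptive"]
STATES = ["unknown", "seems_novice", "seems_expert"]  # agent's belief about the user

def simulated_user(expert, action):
    """Return (understood, reward). Assumed rewards: jargon is fast (+1)
    but usually fails for novices (-1); a descriptive expression is
    always understood but costs extra dialogue time (+0.2)."""
    if action == "jargon":
        understood = expert or random.random() < 0.1
        return understood, (1.0 if understood else -1.0)
    return True, 0.2

def update_belief(state, action, understood):
    # Only jargon attempts yield evidence about the user's expertise.
    if action == "jargon":
        return "seems_expert" if understood else "seems_novice"
    return state

def train(episodes=5000, alpha=0.1, gamma=0.9, epsilon=0.2):
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        expert = random.random() < 0.5      # each episode: a new, unknown user
        state = "unknown"
        for _ in range(5):                  # five references per dialogue
            if random.random() < epsilon:   # epsilon-greedy exploration
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            understood, reward = simulated_user(expert, action)
            nxt = update_belief(state, action, understood)
            target = reward + gamma * max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = nxt
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)  # expect: descriptive for seems_novice, jargon for seems_expert
```

The point of the sketch is that the adaptive behaviour is not hand-coded: the policy learns from the simulated user to exploit evidence gathered during the dialogue, choosing descriptive expressions once the user appears to be a novice and jargon once the user appears to be an expert.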