We present and evaluate a new model for Natural Language Generation (NLG) in Spoken Dialogue Systems, based on statistical planning, given noisy feedback from the current generation context (e.g. a user and a surface realiser). The model is adaptive and incremental at the turn level, and optimises NLG actions with respect to a data-driven objective function. We study its use in a standard NLG problem: how to present information (in this case a set of search results) to users, given the complex trade-offs between utterance length, amount of information conveyed, and cognitive load. We encode these trade-offs in an objective function by analysing existing MATCH data. We then train an NLG policy using Reinforcement Learning (RL), which adapts its behaviour to noisy feedback from the current generation context. This policy is compared against several baselines derived from previous work in this area, and the learned policy significantly outperforms all of the prior approaches.
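The training setup described above can be sketched in miniature. The snippet below is a hypothetical illustration, not the paper's actual implementation: it uses tabular Q-learning over an invented two-state, three-action information-presentation problem, with a toy objective function whose weights (information conveyed vs. length and cognitive-load penalties) and noise model are all assumptions made for this example.

```python
import random

random.seed(0)

# Illustrative states and NLG actions (invented for this sketch):
# how many search results remain to present, and how to present them.
STATES = ["many_results", "few_results"]
ACTIONS = ["summary", "recommend", "compare"]

# Toy data-driven objective: each mean reward trades off information
# conveyed against utterance length and cognitive load. All numbers
# are made up for illustration.
MEAN_REWARD = {
    ("many_results", "summary"):   0.8,
    ("many_results", "recommend"): 0.3,
    ("many_results", "compare"):   0.5,
    ("few_results",  "summary"):   0.2,
    ("few_results",  "recommend"): 0.9,
    ("few_results",  "compare"):   0.6,
}

def noisy_reward(state, action):
    """Simulate noisy feedback from the generation context."""
    return MEAN_REWARD[(state, action)] + random.gauss(0.0, 0.05)

# Tabular Q-learning with one NLG decision per episode -- a bandit-style
# simplification of turn-level policy optimisation.
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, epsilon = 0.1, 0.2

for _ in range(10000):
    state = random.choice(STATES)
    if random.random() < epsilon:                       # explore
        action = random.choice(ACTIONS)
    else:                                               # exploit
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    r = noisy_reward(state, action)
    Q[(state, action)] += alpha * (r - Q[(state, action)])

def policy(state):
    """Greedy learned policy: best action under the estimated Q-values."""
    return max(ACTIONS, key=lambda a: Q[(state, a)])
```

Under this toy reward design, the learned policy summarises when many results remain and recommends a single item when few remain, mirroring the kind of context-adaptive behaviour the abstract describes.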