Natural language generators face a multitude of decisions during the generation process. We address the joint optimisation of navigation strategies and referring expressions in a situated setting, with respect to both task success and human-likeness. To this end, we present a novel, comprehensive framework that combines supervised learning, Hierarchical Reinforcement Learning and a hierarchical Information State. A human evaluation shows that our learnt instructions are rated as similar to human-authored instructions, and significantly better than those of a supervised learning baseline.
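To make the idea of hierarchically decomposed generation decisions concrete, the sketch below shows a toy tabular Q-learning setup in which a top-level policy picks a navigation strategy and a child policy, conditioned on that choice, picks a referring expression form. This is a minimal illustration under invented assumptions: the action sets, the simulated user model, and the reward (task success minus a length penalty standing in for human-likeness) are all hypothetical and are not the system, state space, or reward function of the paper.

```python
import random
from collections import defaultdict

# Toy action sets (invented for this sketch): one high-level navigation
# strategy and one referring expression form per episode.
NAV_ACTIONS = ["turn_left_then_go", "go_to_landmark"]
RE_ACTIONS = ["the door", "the blue door", "the blue door on your left"]

def simulate_user(nav, re_form):
    """Hypothetical user model: richer referring expressions raise the
    chance of task success, but longer ones incur a small penalty that
    stands in for reduced human-likeness."""
    success_prob = 0.5 + 0.2 * RE_ACTIONS.index(re_form)
    if nav == "go_to_landmark":
        success_prob += 0.1
    task_reward = 1.0 if random.random() < success_prob else -1.0
    length_penalty = 0.1 * len(re_form.split())
    return task_reward - length_penalty

def epsilon_greedy(q, state, actions, eps=0.2):
    """Pick a random action with probability eps, else the greedy one."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: q[(state, a)])

q_nav = defaultdict(float)  # top-level policy: navigation strategy
q_re = defaultdict(float)   # child policy: referring expression form
alpha = 0.1                 # learning rate

for episode in range(5000):
    state = "start"
    nav = epsilon_greedy(q_nav, state, NAV_ACTIONS)
    re_form = epsilon_greedy(q_re, nav, RE_ACTIONS)  # child state = parent's choice
    reward = simulate_user(nav, re_form)
    # One-step tabular updates; a full MAXQ-style decomposition would
    # propagate the child's cumulative reward to the parent subtask.
    q_re[(nav, re_form)] += alpha * (reward - q_re[(nav, re_form)])
    q_nav[(state, nav)] += alpha * (reward - q_nav[(state, nav)])

best_nav = max(NAV_ACTIONS, key=lambda a: q_nav[("start", a)])
best_re = max(RE_ACTIONS, key=lambda a: q_re[(best_nav, a)])
print("learnt strategy:", best_nav, "|", best_re)
```

In this sketch the hierarchy is only two levels deep and the child's state is simply the parent's chosen action; the point is to show how navigation and referring expression decisions can be optimised jointly rather than in isolation, since the reward for the referring expression depends on the navigation strategy it is embedded in.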