Introduction to Bayesian Networks
Introduction to Reinforcement Learning
Towards developing general models of usability with PARADISE
Natural Language Engineering
PARADISE: a framework for evaluating spoken dialogue agents
ACL '97 Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and Eighth Conference of the European Chapter of the Association for Computational Linguistics
Generation that exploits corpus-based statistical knowledge
COLING '98 Proceedings of the 17th International Conference on Computational Linguistics - Volume 1
Exploiting a probabilistic hierarchical model for generation
COLING '00 Proceedings of the 18th International Conference on Computational Linguistics - Volume 1
Stochastic language generation for spoken dialogue systems
ANLP/NAACL-ConvSyst '00 Proceedings of the 2000 ANLP/NAACL Workshop on Conversational systems - Volume 3
Bootstrapping lexical choice via multiple-sequence alignment
EMNLP '02 Proceedings of the ACL-02 conference on Empirical methods in natural language processing - Volume 10
Natural Language Engineering
Hierarchical reinforcement learning with the MAXQ value function decomposition
Journal of Artificial Intelligence Research
Learning to adapt to unknown users: referring expression generation in spoken dialogue systems
ACL '10 Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics
Optimising information presentation for spoken dialogue systems
ACL '10 Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics
Phrase-based statistical language generation using graphical models and active learning
ACL '10 Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics
A simple domain-independent probabilistic approach to generation
EMNLP '10 Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing
Hierarchical reinforcement learning for adaptive text generation
INLG '10 Proceedings of the 6th International Natural Language Generation Conference
The first challenge on generating instructions in virtual environments
Empirical methods in natural language generation
Spatially-aware dialogue control using hierarchical reinforcement learning
ACM Transactions on Speech and Language Processing (TSLP)
The Bremen system for the GIVE-2.5 challenge
ENLG '11 Proceedings of the 13th European Workshop on Natural Language Generation
Comparing HMMs and Bayesian networks for surface realisation
NAACL HLT '12 Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Optimising incremental dialogue decisions using information density for interactive systems
EMNLP-CoNLL '12 Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning
Optimising incremental generation for spoken dialogue systems: reducing the need for fillers
INLG '12 Proceedings of the Seventh International Natural Language Generation Conference
Language generators in situated domains face a number of content selection, utterance planning, and surface realisation decisions, which can be strictly interdependent. We therefore propose to optimise these processes jointly using Hierarchical Reinforcement Learning. To this end, we induce a reward function for content selection and utterance planning from data using the PARADISE framework, and suggest a novel method for inducing a reward function for surface realisation from corpora, based on generation spaces represented as Bayesian networks. Results in terms of task success and human-likeness suggest that our unified approach outperforms both a baseline optimised in isolation and greedy and random baselines, and that it receives human ratings close to those of human authors.
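The PARADISE framework estimates user satisfaction as a linear regression over a task success measure and dialogue cost metrics; the fitted regression can then serve as a reward function for reinforcement learning. The sketch below illustrates that idea under invented data: the dialogue metrics (task success, turn count, filler count) and ratings are hypothetical, not drawn from the paper.

```python
import numpy as np

# Hypothetical dialogue-level data: each row is
# (task_success, n_turns, n_fillers); targets are user satisfaction ratings.
# All values are invented for illustration.
X = np.array([
    [0.9, 10, 1],
    [0.7, 14, 3],
    [0.4, 20, 6],
    [0.8, 12, 2],
    [0.3, 25, 8],
], dtype=float)
ratings = np.array([4.5, 3.8, 2.1, 4.0, 1.5])

# Normalise each predictor to zero mean / unit variance, as PARADISE does,
# so the regression weights are comparable across metrics.
mean, std = X.mean(axis=0), X.std(axis=0)
Z = (X - mean) / std

# Multiple linear regression (with intercept) via least squares.
A = np.hstack([Z, np.ones((len(Z), 1))])
weights, *_ = np.linalg.lstsq(A, ratings, rcond=None)

def reward(task_success, n_turns, n_fillers):
    """Predicted user satisfaction for one dialogue, usable as an RL reward."""
    z = (np.array([task_success, n_turns, n_fillers]) - mean) / std
    return float(z @ weights[:-1] + weights[-1])
```

A learner optimising content selection and utterance planning would then receive `reward(...)` at the end of each (simulated) dialogue.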
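A generation space represented as a Bayesian network factors the joint probability of a surface realisation into conditional probability tables estimated from corpus counts; a candidate's joint probability then indicates how human-like it is. The following is a minimal sketch of this idea with a chain-structured network over three invented decision variables (sentence process, verb, direction phrase) and a toy corpus, none of which are taken from the paper.

```python
from collections import defaultdict

# Toy "corpus" of surface realisation decisions for navigation instructions,
# as (process, verb, direction_phrase) tuples. All counts are invented.
corpus = [
    ("imperative", "turn", "to your left"),
    ("imperative", "turn", "left"),
    ("imperative", "go", "left"),
    ("declarative", "go", "to the left"),
    ("imperative", "turn", "left"),
]

def cpt(pairs):
    """Estimate a conditional probability table P(child | parent) from counts."""
    counts = defaultdict(lambda: defaultdict(int))
    for parent, child in pairs:
        counts[parent][child] += 1
    return {p: {c: n / sum(ch.values()) for c, n in ch.items()}
            for p, ch in counts.items()}

# Chain-structured network: P(process) * P(verb | process) * P(direction | verb).
p_process = defaultdict(int)
for proc, _, _ in corpus:
    p_process[proc] += 1
p_process = {k: v / len(corpus) for k, v in p_process.items()}
p_verb = cpt([(proc, verb) for proc, verb, _ in corpus])
p_dir = cpt([(verb, d) for _, verb, d in corpus])

def joint(proc, verb, direction):
    """Joint probability of one realisation under the network; higher values
    indicate more human-like choices, so this can ground a reward signal."""
    return (p_process.get(proc, 0.0)
            * p_verb.get(proc, {}).get(verb, 0.0)
            * p_dir.get(verb, {}).get(direction, 0.0))
```

Under these counts, frequent corpus patterns score higher than rare or unseen ones, which is the property a surface realisation reward needs.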