Generating Temporal Expressions (TEs) that are easy to understand, unambiguous, and reasonably short is a challenge for both humans and Spoken Dialogue Systems. Rather than developing hand-written decision rules, we adopt a data-driven approach: we collect user feedback on a variety of possible TEs in terms of task success, ambiguity, and user preference. The data collected in this work are freely available to the research community. These data were then used to train a simulated user and a reinforcement learning policy that learns an adaptive TE generation strategy for a variety of contexts. We evaluate the learned policy both in simulation and with real users, and show that this data-driven adaptive policy significantly improves over a rule-based adaptive policy: perceived task completion rises by 24%, actual task completion shows a small increase, and call duration falls by 16%. Dialogues are thus more efficient, and users are also more confident about the appointment they have agreed with the system.
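The general setup described above — a policy that learns, from a simulated user's feedback, which TE style to use in which context — can be illustrated with a minimal sketch. Everything here is a hypothetical toy: the context and action names, the reward values, and the hand-coded simulated user are illustrative assumptions, not the authors' actual state space, reward model, or learner.

```python
import random

# Illustrative contexts (how far away the appointment is) and candidate
# temporal-expression styles. These labels are assumptions for the sketch.
CONTEXTS = ["same_day", "same_week", "distant"]
ACTIONS = ["absolute", "relative", "combined"]

def simulated_user(context, action):
    """Toy simulated user: rewards the TE style assumed to be clearest
    in each context, mildly penalizes the others."""
    preferred = {"same_day": "relative",
                 "same_week": "combined",
                 "distant": "absolute"}
    return 1.0 if action == preferred[context] else -0.2

def train(episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    """Tabular epsilon-greedy learner (a one-step, contextual-bandit-style
    simplification of RL): estimate the value of each (context, action)."""
    rng = random.Random(seed)
    q = {(c, a): 0.0 for c in CONTEXTS for a in ACTIONS}
    for _ in range(episodes):
        c = rng.choice(CONTEXTS)
        if rng.random() < epsilon:                    # explore
            a = rng.choice(ACTIONS)
        else:                                         # exploit best estimate
            a = max(ACTIONS, key=lambda x: q[(c, x)])
        r = simulated_user(c, a)
        q[(c, a)] += alpha * (r - q[(c, a)])          # incremental value update
    return q

q = train()
# The learned adaptive strategy: best TE style per context.
policy = {c: max(ACTIONS, key=lambda a: q[(c, a)]) for c in CONTEXTS}
print(policy)
```

With enough episodes the greedy policy recovers the simulated user's preferences for each context, which is the essence of learning an adaptive TE strategy from user feedback rather than from hand-written rules.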