Reinforcement Learning

Information state and dialogue management in the TRINDI dialogue move engine toolkit. Natural Language Engineering.
Towards developing general models of usability with PARADISE. Natural Language Engineering.
Quantitative and qualitative evaluation of DARPA Communicator spoken dialogue systems. ACL '01: Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics.
Using machine learning to explore human multimodal clarification strategies. COLING-ACL '06: Proceedings of the COLING/ACL Main Conference Poster Sessions.
Optimizing dialogue management with reinforcement learning: experiments with the NJFun system. Journal of Artificial Intelligence Research.
Adapting the interaction state model in conversational recommender systems. Proceedings of the 10th International Conference on Electronic Commerce.
Evaluating user simulations with the Cramér-von Mises divergence. Speech Communication.
Hybrid reinforcement/supervised learning of dialogue policies from fixed data sets. Computational Linguistics.
Automatic annotation of context and speech acts for dialogue corpora. Natural Language Engineering.
Learning effective and engaging strategies for advice-giving human-machine dialogue. Natural Language Engineering.
An online algorithm for applying reinforcement learning to handle ambiguity in spoken dialogues. TAMC '09: Proceedings of the 6th Annual Conference on Theory and Applications of Models of Computation.
The Knowledge Engineering Review.
Dialogue management based on entities and constraints. SIGDIAL '10: Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue.
Learning dialogue strategies from older and younger simulated users. SIGDIAL '10: Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue.
User Modeling and User-Adapted Interaction.
We explore the use of restricted dialogue contexts in reinforcement learning (RL) of effective dialogue strategies for information-seeking spoken dialogue systems (e.g. COMMUNICATOR (Walker et al., 2001)). The contexts we use are richer than those in previous research in this area (e.g. Levin and Pieraccini, 1997; Scheffler and Young, 2001; Singh et al., 2002; Pietquin, 2004), which used only slot-based information, but are much less complex than the full dialogue "Information States" explored in (Henderson et al., 2005), for which tractable learning is an issue. We explore how incrementally adding richer features allows learning of more effective dialogue strategies. We use two user simulations learned from COMMUNICATOR data (Walker et al., 2001; Georgila et al., 2005b) to explore the effects of different features on the learned dialogue strategies. Our results show that adding the dialogue moves of the last system and user turns increases the average reward of the automatically learned strategies by 65.9% over the original (hand-coded) COMMUNICATOR systems, and by 7.8% over a baseline RL policy that uses only slot-status features. We show that the learned strategies exhibit an emergent "focus switching" strategy and effective use of the 'give help' action.
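The paper's actual features, user simulations, and reward function are not reproduced here; the following is only a minimal toy sketch of the slot-status baseline the abstract describes: tabular Q-learning over a dialogue state made of slot-filled flags, trained against a hand-rolled user simulation. The slot names, fill probability, and reward values are all invented for illustration.

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical slots for an information-seeking task (invented for this sketch).
SLOTS = ("origin", "destination", "date")
ACTIONS = ["ask_" + s for s in SLOTS]

def simulate_user(action, state):
    """Toy user simulation: asking about an empty slot fills it 80% of the time."""
    slot = action[len("ask_"):]
    filled = dict(zip(SLOTS, state))
    if not filled[slot] and random.random() < 0.8:
        filled[slot] = True
    return tuple(filled[s] for s in SLOTS)

def episode(Q, epsilon=0.2, alpha=0.5, gamma=0.95, max_turns=20):
    """One epsilon-greedy Q-learning episode; state = slot-status flags only."""
    state = (False, False, False)
    total = 0.0
    for _ in range(max_turns):
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = simulate_user(action, state)
        done = all(next_state)
        # Per-turn cost, plus a task-completion bonus (invented reward scheme).
        reward = 10.0 if done else -1.0
        total += reward
        best_next = 0.0 if done else max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state
        if done:
            break
    return total

Q = defaultdict(float)
rewards = [episode(Q) for _ in range(2000)]
avg_late = sum(rewards[-200:]) / 200  # average reward once the policy has settled
print(round(avg_late, 2))
```

Enriching the state in the paper's sense would mean extending the `state` tuple with, e.g., the last system and user dialogue moves, at the cost of a larger Q-table; the toy above keeps only the slot-status baseline.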