An artificial discourse language for collaborative negotiation. AAAI '94 Proceedings of the Twelfth National Conference on Artificial Intelligence (vol. 1).
Introduction to Reinforcement Learning.
Learning mixed initiative dialog strategies by using reinforcement learning on both conversants. HLT '05 Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing.
Testing the performance of spoken dialogue systems by means of an artificially simulated user. Artificial Intelligence Review.
Models of Culture for Virtual Human Conversation. UAHCI '09 Proceedings of the 5th International Conference on Universal Access in Human-Computer Interaction. Part III: Applications and Services.
Hybrid approach to user intention modeling for dialog simulation. ACLShort '09 Proceedings of the ACL-IJCNLP 2009 Conference Short Papers.
Learning dialogue strategies from older and younger simulated users. SIGDIAL '10 Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue.
Scaling POMDPs for Spoken Dialog Management. IEEE Transactions on Audio, Speech, and Language Processing.
An annotation scheme for cross-cultural argumentation and persuasion dialogues. SIGDIAL '11 Proceedings of the SIGDIAL 2011 Conference.
Reinforcement learning of question-answering dialogue policies for virtual museum guides. SIGDIAL '12 Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue.
Efficient cultural models of verbal behavior for communicative agents. IVA '12 Proceedings of the 12th International Conference on Intelligent Virtual Agents.
We build culture-specific dialogue policies for virtual humans engaged in negotiation, and in particular in argumentation and persuasion. To do so, we use a corpus of non-culture-specific dialogues to build simulated users (SUs), i.e., models that simulate the behavior of real users. Using these SUs and Reinforcement Learning (RL), we then learn negotiation dialogue policies. Furthermore, we draw on research findings about specific cultures to bias both the SUs and the RL reward functions toward a particular culture. We evaluate the learned policies in a simulation setting; our results are consistent with our SU manipulations and RL reward functions.
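The pipeline the abstract describes, a simulated user plus RL with a culture-biased reward, can be sketched in miniature. The dialogue acts, states, and `rapport_weight` parameter below are illustrative assumptions, not the paper's corpus-derived SUs or actual reward functions; the sketch only shows how shifting the reward function changes the learned policy.

```python
import random

# Hypothetical dialogue acts and states; chosen for illustration only.
ACTIONS = ["small_talk", "argue", "offer", "accept"]
STATES = ["opening", "bargaining", "closing"]

def simulated_user(state, action, rapport_weight, rng):
    """Toy SU: returns (next_state, reward); next_state None ends the dialogue.
    rapport_weight stands in for a culture-specific reward component that
    values relationship-building acts (an assumption, not the paper's SU)."""
    if state == "opening":
        # Small talk earns a culture-dependent rapport reward; getting
        # straight to business earns a small fixed task reward instead.
        reward = rapport_weight if action == "small_talk" else 0.3
        return "bargaining", reward
    if state == "bargaining":
        if action == "offer":
            return "closing", 1.0
        if action == "argue":
            # Arguing sometimes persuades the SU to move toward closing.
            return ("closing" if rng.random() < 0.3 else "bargaining"), 0.2
        return "bargaining", 0.0
    # state == "closing"
    if action == "accept":
        return None, 2.0  # successful end of the negotiation
    return "closing", -0.1

def q_learn(rapport_weight, episodes=2000, alpha=0.2, gamma=0.95,
            eps=0.1, seed=0):
    """Learn a dialogue policy against the SU via epsilon-greedy Q-learning."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        state = "opening"
        for _ in range(50):  # cap episode length
            if rng.random() < eps:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            nxt, r = simulated_user(state, action, rapport_weight, rng)
            future = 0.0 if nxt is None else max(Q[(nxt, b)] for b in ACTIONS)
            Q[(state, action)] += alpha * (r + gamma * future - Q[(state, action)])
            if nxt is None:
                break
            state = nxt
    return Q

def greedy(Q, state):
    """Greedy dialogue act under the learned policy."""
    return max(ACTIONS, key=lambda a: Q[(state, a)])
```

In this toy setting, training with a high `rapport_weight` yields a policy that opens with small talk, while a low weight yields a task-first opening, mirroring in spirit how reward-function manipulations can push learned policies toward culture-specific behavior.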