What do I need to say to convince you to do something? This is an important question for an autonomous agent deciding whom to approach for a resource or to perform an action. Were similar requests granted by similar agents in similar circumstances? Which arguments were most persuasive? What are the costs of putting certain arguments forward? In this paper we present an agent decision-making mechanism in which models of other agents are refined through evidence from past dialogues, and in which these models are used to guide future argumentation strategy. We evaluate our approach empirically, demonstrating that decision-theoretic and machine learning techniques can both significantly improve the cumulative utility of dialogical outcomes and reduce communication overhead.
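The mechanism described above — learning from past dialogues which arguments persuade which agents, then weighing persuasion probability against argument cost — can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual model: the class and function names (`OpponentModel`, `best_argument`), the Laplace-smoothed success estimate, and the linear expected-utility rule are all assumptions made for illustration.

```python
from collections import defaultdict


class OpponentModel:
    """Hypothetical sketch: track, per (agent, argument-type) pair, how often
    that argument persuaded that agent in past dialogues, and estimate a
    Laplace-smoothed probability of success for future requests."""

    def __init__(self):
        self.granted = defaultdict(int)  # times the request was granted
        self.total = defaultdict(int)    # times the argument was tried

    def record(self, agent, argument, success):
        """Update the model with the outcome of one past dialogue."""
        key = (agent, argument)
        self.total[key] += 1
        if success:
            self.granted[key] += 1

    def p_success(self, agent, argument):
        """Laplace-smoothed success estimate; 0.5 when no evidence exists."""
        key = (agent, argument)
        return (self.granted[key] + 1) / (self.total[key] + 2)


def best_argument(model, agent, arguments, benefit):
    """Decision-theoretic choice: pick the (argument, cost) pair maximizing
    expected utility = p(success) * benefit - cost of putting it forward."""
    return max(
        arguments,
        key=lambda arg: model.p_success(agent, arg[0]) * benefit - arg[1],
    )


# Usage: after observing a few dialogues with agent "B", choose what to say next.
model = OpponentModel()
for _ in range(3):
    model.record("B", "reward", True)   # offering a reward persuaded B three times
model.record("B", "threat", False)      # a threat was rebuffed once

choice = best_argument(model, "B", [("reward", 1.0), ("threat", 0.5)], benefit=10.0)
```

Here `choice` selects the `"reward"` argument: its estimated success probability of 0.8 outweighs the cheaper but historically unsuccessful threat. Reducing communication overhead follows the same logic — an agent can decline to open a dialogue at all when no argument has positive expected utility.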