Recently, Argumentation Mechanism Design (ArgMD) was introduced as a new paradigm for studying argumentation among self-interested agents using game-theoretic techniques. Preliminary results established a condition under which a direct mechanism based on Dung's grounded semantics is strategy-proof (i.e., truth-enforcing). However, these early results dealt with a highly restricted class of agent preferences and assumed that agents can only hide arguments, not lie about them. In this paper, we characterise strategy-proofness under grounded semantics for a more realistic preference class (namely, focal arguments). We also provide the first analysis of the case where agents can lie.
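As an illustrative aside (not part of the paper), the grounded semantics the abstract refers to can be computed as the least fixed point of Dung's characteristic function: starting from the empty set, repeatedly collect every argument all of whose attackers are counter-attacked by the current set. A minimal Python sketch, with hypothetical names `grounded_extension`, `arguments`, and `attacks`:

```python
def grounded_extension(arguments, attacks):
    """Grounded extension of a Dung framework (arguments, attacks).

    attacks is a set of (attacker, target) pairs. The grounded
    extension is the least fixed point of the characteristic
    function F(S) = {a | every attacker of a is attacked by S}.
    """
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    s = set()
    while True:
        # a is acceptable w.r.t. s if s counter-attacks each attacker of a
        nxt = {a for a in arguments
               if all(any((d, b) in attacks for d in s)
                      for b in attackers[a])}
        if nxt == s:          # fixed point reached
            return s
        s = nxt

# Chain A -> B -> C: A is unattacked, B is defeated, C is defended by A.
print(grounded_extension({"A", "B", "C"}, {("A", "B"), ("B", "C")}))
```

In a direct ArgMD mechanism, each agent reports a set of arguments and an outcome function such as this one is applied to the union of the reports; strategy-proofness asks when truthfully revealing one's arguments is a dominant strategy.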