A characterisation of strategy-proofness for grounded argumentation semantics

  • Authors:
  • Iyad Rahwan; Kate Larson; Fernando Tohmé

  • Affiliations:
  • Faculty of Informatics, British University in Dubai, Dubai, UAE and School of Informatics, University of Edinburgh, UK; Cheriton School of Computer Science, University of Waterloo, Canada; LIDIA, Universidad Nacional del Sur, Bahía Blanca, CONICET, Argentina

  • Venue:
  • IJCAI'09 Proceedings of the 21st International Joint Conference on Artificial Intelligence
  • Year:
  • 2009


Abstract

Recently, Argumentation Mechanism Design (ArgMD) was introduced as a new paradigm for studying argumentation among self-interested agents using game-theoretic techniques. Preliminary results established a condition under which a direct mechanism based on Dung's grounded semantics is strategy-proof (i.e., truth-enforcing). But these early results dealt with a highly restricted form of agent preferences, and assumed that agents can only hide, but not lie about, arguments. In this paper, we characterise strategy-proofness under grounded semantics for a more realistic preference class (namely, focal arguments). We also provide the first analysis of the case where agents can lie.
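
The grounded semantics the mechanism builds on is Dung's (1995): the grounded extension is the least fixed point of the characteristic function F, where F(S) is the set of arguments whose every attacker is counter-attacked by some member of S. As a minimal illustrative sketch of this standard fixed-point computation (not the paper's mechanism code; the function name and data representation below are our own choices):

```python
def grounded_extension(arguments, attacks):
    """Grounded extension of a finite argumentation framework.

    arguments: a set of argument labels.
    attacks:   a set of (attacker, attacked) pairs.
    """
    # Attackers of each argument, derived from the attack relation.
    attackers = {a: {b for (b, c) in attacks if c == a} for a in arguments}

    def characteristic(s):
        # F(S): arguments whose every attacker is itself attacked by S.
        return {a for a in arguments
                if all(any((d, b) in attacks for d in s)
                       for b in attackers[a])}

    # Iterate F from the empty set; on a finite framework this
    # reaches the least fixed point, i.e. the grounded extension.
    s = set()
    while True:
        nxt = characteristic(s)
        if nxt == s:
            return s
        s = nxt


# Example: A attacks B, B attacks C. A is unattacked, so A is in;
# A defends C against B, so C is in; B is out.
print(grounded_extension({"A", "B", "C"}, {("A", "B"), ("B", "C")}))
# -> {'A', 'C'}
```

In the ArgMD setting studied in the paper, each agent reports a set of arguments, the outcome is the grounded extension of the combined framework, and strategy-proofness asks when truthful revelation is a dominant strategy.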