Agents engage in dialogues with the goal of making certain arguments acceptable or unacceptable. To do so, they may put forward arguments, adding them to the argumentation framework. Argumentation semantics can relate a change in the framework to the resulting extensions, but it is not clear, given an argumentation framework and a desired acceptance status for a given set of arguments, which further arguments should be added in order to achieve those justification statuses. Our methodology, called conditional labelling, is based on argument labelling and assigns three propositional formulae to each argument. These formulae describe which arguments an agent should attack in order to make a particular argument in, out, or undecided, respectively. Given a conditional labelling, agents have full knowledge of the consequences of the attacks they may raise on the acceptability of each argument, without having to recompute the overall labelling of the framework for each possible set of attacks.
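To make the underlying notions concrete, the following is a minimal sketch (not the paper's conditional-labelling algorithm) of a standard grounded labelling computed by fixed-point iteration. It illustrates the dependency that conditional labelling captures symbolically: adding a single attack to the framework can flip an argument's status from in to out, and the statuses of downstream arguments along with it. All names here (`grounded_labelling`, the example arguments `a`–`d`) are illustrative assumptions, not identifiers from the paper.

```python
def grounded_labelling(args, attacks):
    """Compute the grounded labelling of an abstract argumentation framework.

    args:    iterable of argument names
    attacks: set of (attacker, target) pairs

    An argument is labelled "in" once all of its attackers are "out",
    "out" once some attacker is "in", and stays "undec" otherwise.
    """
    label = {a: "undec" for a in args}
    changed = True
    while changed:
        changed = False
        for a in args:
            if label[a] != "undec":
                continue
            attackers = [b for (b, t) in attacks if t == a]
            if all(label[b] == "out" for b in attackers):
                label[a] = "in"       # every attacker defeated (or none)
                changed = True
            elif any(label[b] == "in" for b in attackers):
                label[a] = "out"      # defeated by an accepted attacker
                changed = True
    return label

# A chain a -> b -> c: a is in, b is out, c is reinstated (in).
before = grounded_labelling({"a", "b", "c"}, {("a", "b"), ("b", "c")})

# Adding a new argument d that attacks a flips every status downstream:
# d is in, a becomes out, b is reinstated, c becomes out.
after = grounded_labelling({"a", "b", "c", "d"},
                           {("a", "b"), ("b", "c"), ("d", "a")})
```

Conditional labelling, as described above, precomputes exactly this kind of consequence: rather than rerunning the labelling for every candidate set of attacks, each argument carries formulae stating which attacks would make it in, out, or undecided.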