The outcomes of logic-based argumentation systems under preferred semantics
SUM'12 Proceedings of the 6th International Conference on Scalable Uncertainty Management
This paper investigates the outputs of abstract logic-based argumentation systems under stable semantics. We bound the number of stable extensions such a system may have. We show that, in the best case, an argumentation system infers exactly the conclusions common to all maximal consistent subbases of the original knowledge base; this output coincides with the one returned under naive semantics. In the worst case, counter-intuitive results are returned, and in the intermediate case the system misses intuitive conclusions. Both of the latter cases stem from the use of skewed attack relations. These results show that stable semantics is either superfluous or unsuitable in logic-based argumentation systems. Finally, we show that under this semantics, argumentation systems may inherit the problems of coherence-based approaches.
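The "best case" output described above can be illustrated concretely: the system's conclusions are exactly those drawn from every maximal consistent subbase of the knowledge base. Below is a minimal sketch for a toy knowledge base of propositional literals, where the string `~p` is assumed to encode the negation of `p`; the function names and brute-force enumeration are illustrative assumptions, not the paper's construction.

```python
from itertools import combinations

def consistent(literals):
    """A set of literals is consistent iff it contains no complementary pair."""
    return not any(
        (lit[1:] if lit.startswith("~") else "~" + lit) in literals
        for lit in literals
    )

def max_consistent_subbases(kb):
    """Enumerate the maximal (w.r.t. set inclusion) consistent subsets of kb.

    Brute force: scan subsets from largest to smallest, keeping a consistent
    subset only if it is not contained in one already found.
    """
    kb = list(kb)
    found = []
    for size in range(len(kb), 0, -1):
        for subset in combinations(kb, size):
            s = frozenset(subset)
            if consistent(s) and not any(s < f for f in found):
                found.append(s)
    return found

def common_conclusions(kb):
    """Literals entailed by every maximal consistent subbase (best-case output)."""
    subbases = max_consistent_subbases(kb)
    return frozenset.intersection(*subbases) if subbases else frozenset()

# With kb = {p, ~p, q}, the maximal consistent subbases are {p, q} and
# {~p, q}; their only common conclusion is q.
print(common_conclusions({"p", "~p", "q"}))  # frozenset({'q'})
```

For full propositional formulas rather than literals, consistency checking would require a SAT call; the subset enumeration above is also exponential and only suitable for small toy bases.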