From an inconsistent database, non-trivial arguments can be constructed both for a proposition and for its contrary. Inconsistency in a logical database therefore causes uncertainty about which conclusions to accept; we call this kind of uncertainty logical uncertainty. We define a concept of "acceptability", which induces a means of differentiating arguments: the more acceptable an argument, the more confident we are in it. A specific interest is to use the acceptability classes to assign linguistic qualifiers to propositions, such that the qualifier assigned to a proposition reflects its logical uncertainty. A more general interest is to understand how classes of acceptability can be defined for arguments constructed from an inconsistent database, and how this notion of acceptability can be devised to reflect different criteria. While concentrating on the assignment of linguistic qualifiers to propositions, we also indicate the more general significance of the notion of acceptability.
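The idea of building arguments from an inconsistent database and grading them can be sketched in code. The following is a minimal illustrative Python sketch, not the paper's actual definitions: the toy knowledge base, the brute-force entailment check, and the two crude qualifier labels ("supported", "probable") are all our own simplifying assumptions. An argument here is a minimal consistent subset of the database that classically entails the conclusion; a conclusion whose arguments face counterarguments receives a weaker qualifier than one with no counterargument.

```python
from itertools import combinations, product

# Hypothetical inconsistent database over atoms p, q (names are illustrative).
# Each entry is (label, formula-as-predicate-over-an-assignment).
KB = [
    ("p",      lambda m: m["p"]),
    ("p -> q", lambda m: (not m["p"]) or m["q"]),
    ("~q",     lambda m: not m["q"]),
]
ATOMS = ["p", "q"]

def models(formulas):
    """All truth assignments satisfying every formula in the list."""
    out = []
    for values in product([False, True], repeat=len(ATOMS)):
        m = dict(zip(ATOMS, values))
        if all(f(m) for _, f in formulas):
            out.append(m)
    return out

def entails(support, goal):
    """Classical entailment: goal holds in every model of the support set."""
    return all(goal(m) for m in models(support))

def arguments_for(goal):
    """Minimal consistent subsets of KB that classically entail goal."""
    args = []
    for r in range(len(KB) + 1):
        for subset in combinations(KB, r):
            # Keep only consistent supports, and only minimal ones.
            if models(list(subset)) and entails(list(subset), goal):
                if not any(set(a) <= set(subset) for a in args):
                    args.append(subset)
    return args

def qualifier(pro, con):
    """A crude two-level linguistic qualifier; the real paper's
    hierarchy of acceptability classes is richer than this."""
    if not pro:
        return "open"
    return "supported" if con else "probable"

q_goal = lambda m: m["q"]
not_q_goal = lambda m: not m["q"]
pro, con = arguments_for(q_goal), arguments_for(not_q_goal)

print([[name for name, _ in a] for a in pro])  # supports for q
print(qualifier(pro, con))
```

With this toy database, q is entailed by the consistent subset {p, p -> q} while ~q is entailed by {~q}, so q is merely "supported": arguments exist for both the proposition and its contrary, which is exactly the logical uncertainty the abstract describes.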