There have been a number of proposals for measuring inconsistency in a knowledgebase (i.e. a set of logical formulae). These include measures that consider the minimally inconsistent subsets of the knowledgebase, and measures that consider its paraconsistent models (three- or four-valued models). In this paper, we present a new approach that considers the amount by which each formula has to be weakened in order for the knowledgebase to be consistent. This approach is based on ideas of knowledge merging by Konieczny and Pino-Pérez. We show that this approach yields measures that differ from existing measures, that have desirable properties, and that can take the significance of inconsistencies into account. The latter is useful when we want to differentiate inconsistencies of minor significance from those of major significance. We also show how our measures are potentially useful in applications such as evaluating violations of integrity constraints in databases.
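To make the baseline concrete, here is a minimal sketch of the kind of minimal-inconsistent-subset measure the abstract contrasts with. This is our own illustrative toy code, not the paper's weakening-based construction: formulas are modelled as Boolean functions over a fixed set of atoms, consistency is checked by brute force, and the function and variable names (`minimal_inconsistent_subsets`, `ATOMS`, etc.) are ours.

```python
from itertools import combinations, product

# Toy propositional setting (illustrative only): a formula is a function
# from a valuation (dict atom -> bool) to bool.
ATOMS = ["a", "b"]

def is_consistent(formulas):
    """Brute-force satisfiability check over all valuations of ATOMS."""
    for values in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(f(v) for f in formulas):
            return True
    return False

def minimal_inconsistent_subsets(kb):
    """Subsets (as index tuples) that are inconsistent while all their
    proper subsets are consistent; found smallest-first."""
    mis = []
    for r in range(1, len(kb) + 1):
        for subset in combinations(range(len(kb)), r):
            if not is_consistent([kb[i] for i in subset]):
                # Keep only if no already-found MIS is contained in it.
                if all(not set(m) <= set(subset) for m in mis):
                    mis.append(subset)
    return mis

# Example knowledgebase: {a, not a, b}
kb = [lambda v: v["a"], lambda v: not v["a"], lambda v: v["b"]]
mis = minimal_inconsistent_subsets(kb)
print(mis)       # [(0, 1)] — the single clash between a and not a
print(len(mis))  # a simple MIS-count inconsistency measure: 1
```

Note how a measure of this kind only counts where the clashes lie; it has no notion of how severely each formula would need to be weakened, which is the gap the abstract's merging-based approach is aimed at.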