Representing and reasoning with probabilistic knowledge: a logical approach to probabilities
Numerous logics have been developed for reasoning about inconsistency; they differ in (i) the logic to which they apply and (ii) the criteria used to draw inferences. In this paper, we propose a general framework for reasoning about inconsistency in a wide variety of logics, including ones for which inconsistency-resolution methods have not yet been studied (e.g. various temporal and epistemic logics). We start from Tarski and Scott's axiomatization of logics, but drop their monotonicity requirement, which we believe is too strong for AI. For such a logic L, we define the concept of an option: a set of formulas of L that is closed and consistent according to the notions of consequence and consistency in L. We show that by defining an appropriate preference relation on options, we can capture several existing approaches, such as Brewka's subtheories. We also provide algorithms to compute the most preferred options.
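The notions of an option and a preference relation can be illustrated in a minimal propositional setting. The sketch below is an illustration under simplifying assumptions, not the paper's actual algorithm: formulas are plain literals, so closure is trivial and consistency reduces to the absence of a complementary pair; the preference relation is cardinality, which recovers Brewka-style maximal consistent subtheories. The function names (`consistent`, `options`, `most_preferred`) are hypothetical.

```python
from itertools import combinations

def complement(lit):
    # Negation of a literal: "p" <-> "~p".
    return lit[1:] if lit.startswith("~") else "~" + lit

def consistent(literals):
    # A set of literals is consistent iff it contains no complementary pair.
    return not any(complement(l) in literals for l in literals)

def options(kb):
    # Enumerate all consistent subsets of the knowledge base.
    # For plain literals, every set is already deductively closed.
    return [frozenset(c)
            for r in range(len(kb) + 1)
            for c in combinations(sorted(kb), r)
            if consistent(set(c))]

def most_preferred(kb, prefer=len):
    # A preference relation on options; here cardinality, so the most
    # preferred options are the maximal consistent subsets.
    opts = options(kb)
    best = max(prefer(o) for o in opts)
    return [set(o) for o in opts if prefer(o) == best]

kb = {"p", "~p", "q"}
print(most_preferred(kb))  # two maximal consistent subsets: {p, q} and {~p, q}
```

Other preference relations (e.g. priorities on formulas, or weights) slot in by replacing the `prefer` argument; the brute-force enumeration is exponential and is only meant to make the definitions concrete.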