Reasoning with Inconsistencies in Propositional Peer-to-Peer Inference Systems

  • Authors:
  • Ph. Chatalic; G. H. Nguyen; M. Ch. Rousset

  • Affiliations:
  • LRI-PCRI-Université Paris-Sud 11, Orsay, France. chatalic@lri.fr

  • Venue:
  • Proceedings of the 2006 conference on ECAI 2006: 17th European Conference on Artificial Intelligence August 29 -- September 1, 2006, Riva del Garda, Italy
  • Year:
  • 2006


Abstract

In a peer-to-peer inference system, there is no centralized control or hierarchical organization: each peer is equivalent in functionality and cooperates with other peers in order to solve a collective reasoning task. Since peer theories model possibly different viewpoints, even if each local theory is consistent, the global theory may be inconsistent. We exhibit a distributed algorithm detecting inconsistencies in a fully decentralized setting. We provide a fully distributed reasoning algorithm, which computes only well-founded consequences of a formula, i.e., with a consistent set of support.