Hypotheses refinement under topological communication constraints
Proceedings of the 6th international joint conference on Autonomous agents and multiagent systems
What happens when distributed sources of information (agents) hold and acquire information locally, and must communicate with neighbouring agents in order to refine their hypotheses about the actual global state of the environment? This question arises when it is not possible (e.g. for practical or privacy reasons) to collect all observations and knowledge centrally and compute the resulting theory in one place. In this paper, we assume that agents are equipped with full clausal theories and individually face abductive tasks, in a globally consistent environment. We adopt a learner/critic approach. Previous work along these lines has mostly relied on assumptions of compositionality (which allow each piece of exchanged information to be treated separately); since no shared background knowledge is assumed here, those assumptions do not hold. We design a protocol that is guaranteed to converge to a situation that is “sufficiently” satisfactory as far as the consistency of the system is concerned, and discuss its other properties.
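To make the learner/critic interaction concrete, the following is a toy propositional sketch, not the paper's actual protocol: all names (`CriticAgent`, `refine`, the brute-force `satisfiable` check) are invented for illustration, clauses are frozensets of signed integers, and a learner simply retracts any hypothesis literal a critic objects to until no critic raises an inconsistency with its private clausal theory.

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Brute-force SAT check for small propositional theories.
    Clauses are frozensets of non-zero ints; -v means variable v negated."""
    for assignment in product([False, True], repeat=n_vars):
        def holds(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        if all(any(holds(l) for l in clause) for clause in clauses):
            return True
    return False

class CriticAgent:
    """Holds a private clausal theory; objects to inconsistent hypotheses."""
    def __init__(self, theory):
        self.theory = set(theory)

    def critique(self, hypothesis, n_vars):
        # The hypothesis (a set of literals) is added as unit clauses.
        clauses = self.theory | {frozenset([l]) for l in hypothesis}
        if satisfiable(clauses, n_vars):
            return None  # no objection
        # Object by naming one hypothesis literal that alone conflicts.
        for l in hypothesis:
            if not satisfiable(self.theory | {frozenset([l])}, n_vars):
                return l
        # Joint inconsistency only: retract an arbitrary literal.
        return next(iter(hypothesis))

def refine(initial_hypothesis, critics, n_vars):
    """Learner loop: circulate the hypothesis until every critic is silent."""
    hyp = set(initial_hypothesis)
    changed = True
    while changed and hyp:
        changed = False
        for critic in critics:
            objection = critic.critique(hyp, n_vars)
            if objection is not None:
                hyp.discard(objection)  # retract the contested literal
                changed = True
    return hyp

# Variables: 1 = rain, 2 = wet. One critic privately knows "not rain".
critics = [CriticAgent({frozenset([-1])})]
print(refine({1, 2}, critics, n_vars=2))  # → {2}
```

Monotonically shrinking the hypothesis guarantees termination here; the paper's setting is harder, since full clausal (first-order) theories and topological constraints on who can talk to whom rule out this kind of direct all-critics broadcast.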