Truth Maintenance Systems for Problem Solving
Representing and reasoning about the knowledge an agent (human or computer) must have to accomplish some task is an increasingly important issue in artificial intelligence (AI) research. To reason about an agent's beliefs, an AI system must assume some formal model of those beliefs. An attractive candidate is the Deductive Belief model: an agent's beliefs are described as a set of sentences in some formal language (the base sentences), together with a deductive process for deriving consequences of those beliefs. In particular, a Deductive Belief model can account for the effect of resource limitations on deriving consequences of the base set: an agent need not believe all the logical consequences of its beliefs. In this paper we develop a belief model based on the notion of deduction, and contrast it with current AI formalisms for belief derived from Hintikka/Kripke possible-worlds semantics for knowledge.
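The idea of a resource-bounded deductive process can be sketched as a small program: beliefs are a base set plus whatever a bounded number of inference steps derives, so the belief set need not be closed under logical consequence. The concrete representation below (atoms as strings, rules as premise/conclusion pairs, a single modus-ponens rule, a step bound) is an illustrative assumption, not the paper's formalism.

```python
# Minimal sketch of a Deductive Belief model: base sentences plus a
# deduction process limited to a fixed number of inference rounds.
# Representation choices here (strings, pair-rules, modus ponens only)
# are hypothetical simplifications for illustration.

class DeductiveBeliefAgent:
    def __init__(self, base, rules, max_steps):
        self.base = set(base)        # base sentences, e.g. "p"
        self.rules = set(rules)      # implications as (premise, conclusion) pairs
        self.max_steps = max_steps   # resource bound on deduction

    def believes(self, sentence):
        """Apply modus ponens for at most max_steps rounds; because of the
        bound, the agent is not logically omniscient."""
        derived = set(self.base)
        for _ in range(self.max_steps):
            new = {concl for prem, concl in self.rules
                   if prem in derived and concl not in derived}
            if not new:
                break
            derived |= new
        return sentence in derived

# With a bound of one step, the agent believes "q" (one modus-ponens
# application from "p") but not "r", even though "r" follows logically.
agent = DeductiveBeliefAgent(base={"p"},
                             rules={("p", "q"), ("q", "r")},
                             max_steps=1)
```

Raising `max_steps` to 2 makes the same agent believe "r" as well, which is exactly the resource-sensitivity the abstract contrasts with possible-worlds models, where belief is always closed under consequence.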