Knowledge, representation, and rational self-government

  • Authors: Jon Doyle
  • Affiliations: Carnegie-Mellon University, Pittsburgh, Pennsylvania
  • Venue: TARK '88: Proceedings of the 2nd conference on Theoretical aspects of reasoning about knowledge
  • Year: 1988

Abstract

It is commonplace in artificial intelligence to draw a distinction between the explicit knowledge appearing in an agent's memory and the implicit knowledge it represents. Many AI theories of knowledge assume this representation relation is logical, that is, that implicit knowledge is derived from explicit knowledge via a logic. Such theories, however, are limited in their ability to treat incomplete or inconsistent knowledge in useful ways. We suggest that a more illuminating theory of nonlogical inferences is that they are cases of rational inference, in which the agent rationally (in the sense of decision theory) chooses the conclusions it wishes to adopt. Thus in rational inference, the implicit beliefs depend on the agent's preferences about its states of belief and on its beliefs about its states of belief as well as on the beliefs themselves. The explicit representations possessed by the agent are not viewed as knowledge themselves, but only as materials or prima facie knowledge from which the agent rationally constructs the bases of its actions, so that its actual knowledge, as a set of attitudes, may be either more or less than the attitudes entailed logically by the explicit ones. That is, we keep the idea that the explicit knowledge represents the implicit knowledge, but change the nature of the representation function from logical closure under derivations to rational choice. In this theory, rationality serves as an ideal every bit as attractive as logicality, and moreover, provides satisfying treatments of the cases omitted by the narrow logical view, subsuming and explaining many AI approaches toward reasoning with incomplete and inconsistent knowledge.
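To make the contrast concrete, the following is a minimal, hypothetical Python sketch (not from the paper) of the two representation functions the abstract describes: deriving implicit beliefs by logical closure versus rationally choosing a belief state according to the agent's preferences over its possible states of belief. The functions logical_closure and rational_beliefs, the toy Horn-style rules, and the utility function are all illustrative assumptions; Doyle's account is decision-theoretic and far more general.

    from itertools import chain, combinations

    def logical_closure(facts, rules):
        """Closure of explicit facts under simple (premises, conclusion) rules."""
        beliefs = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= beliefs and conclusion not in beliefs:
                    beliefs.add(conclusion)
                    changed = True
        return beliefs

    def rational_beliefs(facts, rules, utility):
        """Choose the belief state the agent prefers most.
        Candidates here are subsets of the closure that keep the explicit facts;
        a real agent would search a far more constrained space."""
        closure = logical_closure(facts, rules)
        extras = list(closure - set(facts))
        candidates = [set(facts) | set(c)
                      for c in chain.from_iterable(combinations(extras, r)
                                                   for r in range(len(extras) + 1))]
        return max(candidates, key=utility)

    if __name__ == "__main__":
        facts = {"bird(tweety)"}
        rules = [({"bird(tweety)"}, "flies(tweety)")]

        # A preference that distrusts the default conclusion "flies(tweety)",
        # so the rationally chosen beliefs are *less* than the logical closure.
        def utility(beliefs):
            score = len(beliefs)              # prefer more informative states...
            if "flies(tweety)" in beliefs:
                score -= 2                    # ...but penalize this conclusion
            return score

        print("closure: ", logical_closure(facts, rules))
        print("rational:", rational_beliefs(facts, rules, utility))

Run as written, the closure contains flies(tweety) while the rationally chosen belief set does not, illustrating the abstract's point that the agent's actual knowledge may fall short of (or exceed) what its explicit representations logically entail, depending on its preferences about its own states of belief.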