Ascribing beliefs to resource bounded agents

  • Authors:
  • Natasha Alechina; Brian Logan

  • Affiliations:
  • University of Nottingham, Nottingham, UK; University of Nottingham, Nottingham, UK

  • Venue:
  • Proceedings of the first international joint conference on Autonomous agents and multiagent systems: part 2
  • Year:
  • 2002


Abstract

Logical approaches to reasoning about agents often rely on idealisations about belief ascription and logical omniscience which make it difficult to apply the results obtained to real agents. In this paper, we show how to ascribe beliefs and an ability to reason in an arbitrary decidable logic to an agent in a computationally grounded way. We characterise those cases in which the assumption that an agent is logically omniscient in a given logic is 'harmless' in the sense that it does not lead to making incorrect predictions about the agent, and show that such an assumption is not harmless when our predictions have a temporal dimension: 'now the agent believes p', and the agent requires time to derive the consequences of its beliefs. We present a family of logics for reasoning about the beliefs of an agent which is a perfect reasoner in an arbitrary decidable logic L but only derives the consequences of its beliefs after some delay Δ. We investigate two members of this family in detail, LΔ in which all the consequences are derived at the next tick of the clock, and L*Δ in which the agent adds at most one new belief to its set of beliefs at every tick of the clock, and show that these are sound, complete and decidable.
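The two belief-update regimes described in the abstract can be illustrated with a small simulation. The sketch below is not from the paper: it uses a hypothetical set of propositional Horn rules and models an LΔ-style agent as one whose belief set is fully closed under the rules after one tick, versus an L*Δ-style agent that adds at most one new belief per tick.

```python
# Hypothetical Horn rules (premises, conclusion), standing in for the
# agent's decidable logic L. Not taken from the paper.
RULES = [
    ({"p"}, "q"),
    ({"q"}, "r"),
    ({"p", "r"}, "s"),
]

def step_closure(beliefs, rules):
    """L_Delta-style tick: after one tick, beliefs are closed under the rules."""
    beliefs = set(beliefs)
    changed = True
    while changed:
        changed = False
        for prem, concl in rules:
            if prem <= beliefs and concl not in beliefs:
                beliefs.add(concl)
                changed = True
    return beliefs

def step_one(beliefs, rules):
    """L*_Delta-style tick: the agent adds at most one new belief per tick."""
    for prem, concl in rules:
        if prem <= beliefs and concl not in beliefs:
            return beliefs | {concl}
    return set(beliefs)

def trace(step, beliefs, rules, ticks):
    """Record the agent's belief set at each tick."""
    history = [set(beliefs)]
    for _ in range(ticks):
        beliefs = step(beliefs, rules)
        history.append(set(beliefs))
    return history

# Starting from the single belief {p}, the L_Delta-style agent believes the
# full closure {p, q, r, s} after one tick, while the L*_Delta-style agent
# needs three ticks to derive the same set.
fast = trace(step_closure, {"p"}, RULES, 1)
slow = trace(step_one, {"p"}, RULES, 3)
```

This makes the abstract's point about temporal predictions concrete: a prediction such as 'at tick 1 the agent believes s' is correct under the LΔ idealisation but wrong for the L*Δ agent, even though both agents eventually reach the same belief set.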