A Logic of Situated Resource-Bounded Agents
Journal of Logic, Language and Information
Logical approaches to reasoning about agents often rely on idealisations about belief ascription and logical omniscience which make it difficult to apply the results obtained to real agents. In this paper, we show how to ascribe beliefs, and an ability to reason in an arbitrary decidable logic, to an agent in a computationally grounded way. We characterise those cases in which the assumption that an agent is logically omniscient in a given logic is 'harmless', in the sense that it does not lead to incorrect predictions about the agent, and show that the assumption is not harmless when our predictions have a temporal dimension ('now the agent believes p') and the agent requires time to derive the consequences of its beliefs. We present a family of logics for reasoning about the beliefs of an agent which is a perfect reasoner in an arbitrary decidable logic L but only derives the consequences of its beliefs after some delay Δ. We investigate two members of this family in detail: LΔ, in which all the consequences are derived at the next tick of the clock, and L*Δ, in which the agent adds at most one new belief to its set of beliefs at each tick of the clock. We show that both logics are sound, complete and decidable.
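The contrast between the two logics can be illustrated with a small simulation. The sketch below is not the semantics from the paper; it stands in for the "arbitrary decidable logic L" with a simple Horn-rule consequence operator, and the function names, rule encoding, and the deterministic choice of which single belief L*Δ adds per tick are all illustrative assumptions.

```python
# Illustrative sketch: belief sets evolving over ticks for two
# resource-bounded reasoning regimes. A "rule" is (premises, conclusion);
# one_step returns every conclusion derivable by a single rule application.

def one_step(beliefs, rules):
    """New conclusions derivable from the current belief set in one step."""
    return frozenset(c for prem, c in rules if prem <= beliefs) - beliefs

def run_l_delta(beliefs, rules, ticks):
    """LΔ-style agent: at each tick, add *all* one-step consequences."""
    history = [beliefs]
    for _ in range(ticks):
        beliefs = beliefs | one_step(beliefs, rules)
        history.append(beliefs)
    return history

def run_l_star_delta(beliefs, rules, ticks):
    """L*Δ-style agent: add at most *one* new belief per tick."""
    history = [beliefs]
    for _ in range(ticks):
        new = sorted(one_step(beliefs, rules))
        if new:
            beliefs = beliefs | {new[0]}  # deterministic pick, for the sketch
        history.append(beliefs)
    return history

# Hypothetical rule base: p -> q, p -> r, (q and r) -> s.
rules = [(frozenset({"p"}), "q"),
         (frozenset({"p"}), "r"),
         (frozenset({"q", "r"}), "s")]
b0 = frozenset({"p"})

print(sorted(run_l_delta(b0, rules, 2)[-1]))       # ['p', 'q', 'r', 's']
print(sorted(run_l_star_delta(b0, rules, 2)[-1]))  # ['p', 'q', 'r']
```

After two ticks the LΔ agent already believes s, while the L*Δ agent, having added only one belief per tick, needs a third tick to derive it. This is exactly why a temporal prediction such as "now the agent believes s" depends on the delay regime, not just on what is derivable in L.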