A computationally grounded logic of knowledge, belief and certainty
Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems
The interpreted system model offers a computationally grounded model, in terms of the states of computer processes, for S5 epistemic logics. This paper extends the interpreted system model and provides a computationally grounded one, called the interpreted perception system model, for epistemic logics other than S5. It is usually assumed, in the interpreted system model, that those parts of the environment that are visible to an agent are correctly perceived by the agent as a whole. The essential idea of the interpreted perception system model is that an agent may have incorrect perceptions or observations of the visible parts of the environment, and the agent may not be aware of this. The notion of knowledge can be defined so that an agent knows a statement iff the statement holds in those states that the agent cannot distinguish (from the current state) using only her correct observations. We establish a logic of knowledge and certainty, called KC logic, with a sound and complete proof system. The knowledge modality in this logic is S4 valid. It becomes S5 if we assume an agent always has correct observations; more interestingly, it can be S4.2 or S4.3 under other natural constraints on agents and their sensors for the environment.
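The abstract's definition of knowledge can be illustrated with a small sketch (not the paper's formalism): knowledge of a statement is checked against every state the agent cannot distinguish from the current one using only its *correct* observations. The states, the faulty sensor, and the propositions below are all illustrative assumptions.

```python
# Each state assigns values to environment variables visible to the agent.
STATES = [
    {"door": "open", "light": "on"},
    {"door": "open", "light": "off"},
    {"door": "closed", "light": "on"},
]

def perceive(state):
    """The agent's (possibly faulty) reading of a state.
    Assumed here: the light sensor is broken and always reports 'on'."""
    return {"door": state["door"], "light": "on"}

def correct_observations(state):
    """Keep only the observations that agree with the actual state."""
    return {k: v for k, v in perceive(state).items() if state[k] == v}

def knows(current, prop):
    """The agent knows `prop` iff it holds in every state indistinguishable
    from `current` on the basis of the correct observations alone."""
    cobs = correct_observations(current)
    indistinguishable = [
        s for s in STATES
        if all(s.get(k) == v for k, v in cobs.items())
    ]
    return all(prop(s) for s in indistinguishable)

cur = STATES[1]  # door open, light actually off; light sensor misreports 'on'
print(knows(cur, lambda s: s["door"] == "open"))  # True: door sensor is correct
print(knows(cur, lambda s: s["light"] == "on"))   # False: faulty reading is discarded
```

Because every state is indistinguishable from itself under its own correct observations, the relation is reflexive, which is consistent with the truthfulness (and S4 validity) of the knowledge modality described in the abstract.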