A Logical Framework for Knowledge Sharing in Multi-agent Systems
COCOON '01 Proceedings of the 7th Annual International Conference on Computing and Combinatorics
We extend our general approach to characterizing information to multi-agent systems. In particular, we give a formal description of an agent's knowledge state containing exactly the information conveyed by some (honest) formula ϕ. Only knowing matters for dynamic agent systems in two ways: first, one wants to compare different knowledge states of an agent; second, for agent a's decisions it may be relevant that (a knows that) agent b knows no more than ϕ. There are three ways to study the question of whether a formula ϕ can be interpreted as minimal information. The first is semantic and inspects 'minimal' models of ϕ (with respect to some order ≤ on states). The second is syntactic and searches for stable expansions that are minimal with respect to some language L*. The third is a deductive test known as the disjunction property. We present a condition under which the three methods are equivalent. We then show how to construct the order ≤ by combining 'layered orders'. Turning to the multi-agent case, we identify languages L* for several orders ≤ and show how they yield different notions of honesty for different multi-modal systems. Finally, we discuss some consequences of the different notions.
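The semantic test can be illustrated, in the single-agent case, with a toy sketch; the two-atom setting, the use of set inclusion on world sets as the order ≤, and names like `honest` are our own illustrative choices, following the minimal-knowledge view in which a formula is honest when the states satisfying it have a unique ⊆-maximal element (the agent's least-informed state):

```python
from itertools import chain, combinations

# Worlds are propositional valuations over two atoms, encoded as
# frozensets of the atoms that are true in that world.
ATOMS = ("p", "q")
WORLDS = [frozenset(s) for s in chain.from_iterable(
    combinations(ATOMS, r) for r in range(len(ATOMS) + 1))]

# An epistemic state is a nonempty set of worlds the agent considers
# possible; the more worlds it contains, the less the agent knows.
STATES = [frozenset(s) for s in chain.from_iterable(
    combinations(WORLDS, r) for r in range(1, len(WORLDS) + 1))]

def K(obj):
    """K(obj) holds at a state iff the objective formula obj
    is true in every world of that state."""
    return lambda state: all(obj(w) for w in state)

def honest(phi):
    """phi (a predicate on states) counts as honest iff the states
    satisfying it have a unique maximum under set inclusion, i.e.
    a single least-informed state that 'only knows' phi."""
    sats = [S for S in STATES if phi(S)]
    maxima = [S for S in sats if not any(S < T for T in sats)]
    return len(maxima) == 1

Kp = K(lambda w: "p" in w)
Kq = K(lambda w: "q" in w)
Kp_or_Kq = lambda S: Kp(S) or Kq(S)

print(honest(Kp))        # unique maximal state (all p-worlds): True
print(honest(Kp_or_Kq))  # two incomparable maxima: False
```

Here Kp is honest, while Kp ∨ Kq admits two incomparable maximal states (the set of all p-worlds and the set of all q-worlds), so no single state captures knowing exactly that disjunction.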