A temporalised belief logic for specifying the dynamics of trust for multi-agent systems
ASIAN'04 Proceedings of the 9th Asian Computing Science conference on Advances in Computer Science: dedicated to Jean-Louis Lassez on the Occasion of His 5th Cycle Birthday
A theory of trust for a given system is a set of rules that describes the trust of agents in the system. Within a given logical framework, such a theory is generally established from the initial trust of agents in the security mechanisms of the system. The theory provides a foundation for reasoning about agent beliefs as well as the security properties the system may satisfy. However, trust changes dynamically: when agents lose trust or gain new trust, a theory established from their initial trust must be revised, otherwise it can no longer serve any security purpose. This paper proposes a methodology for revising and managing dynamic theories of trust for agent-based systems.
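The abstract's core idea can be illustrated with a minimal sketch, assuming a simplified model (not the paper's temporalised belief logic): a trust theory is represented as a set of rules, each recording the trust assumptions it depends on, and revision retracts any rule whose assumptions include lost trust while adding rules for newly gained trust. All names here (`TrustTheory`, `revise`, the example rules) are illustrative assumptions.

```python
class TrustTheory:
    """Toy trust theory: rules annotated with the trust assumptions
    they depend on. Illustrative only; not the paper's formalism."""

    def __init__(self):
        # Maps each rule to the set of trust assumptions it depends on.
        self.rules = {}

    def add_rule(self, rule, assumptions):
        self.rules[rule] = set(assumptions)

    def revise(self, lost_trust=(), gained=()):
        """Retract every rule that depends on lost trust,
        then add rules derived from newly gained trust."""
        lost = set(lost_trust)
        self.rules = {r: a for r, a in self.rules.items() if not (a & lost)}
        for rule, assumptions in gained:
            self.add_rule(rule, assumptions)

    def holds(self, rule):
        return rule in self.rules


theory = TrustTheory()
theory.add_rule("msg_from_B_is_authentic", {"trust(A, key_server)"})
theory.add_rule("channel_is_confidential", {"trust(A, encryption)"})

# Agent A loses trust in the key server: only the dependent rule is retracted.
theory.revise(lost_trust={"trust(A, key_server)"})
assert not theory.holds("msg_from_B_is_authentic")
assert theory.holds("channel_is_confidential")
```

The sketch makes the abstract's point concrete: once trust in the key server is withdrawn, conclusions that rested on it can no longer be drawn, while rules resting on intact trust survive the revision.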