Social trust: a cognitive approach
Trust and deception in virtual societies
Trust Is Much More than Subjective Probability: Mental Components and Sources of Trust
HICSS '00: Proceedings of the 33rd Hawaii International Conference on System Sciences - Volume 6
The Knowledge Engineering Review
Incorporating trust into the BDI architecture
International Journal of Artificial Intelligence and Soft Computing
Trust in LORA: towards a formal definition of trust in BDI agents
KES '06: Proceedings of the 10th International Conference on Knowledge-Based Intelligent Information and Engineering Systems - Part II
Agent cooperation and collaboration
KES '06: Proceedings of the 10th International Conference on Knowledge-Based Intelligent Information and Engineering Systems - Part II
Innovations in intelligent agents
KES '05: Proceedings of the 9th International Conference on Knowledge-Based Intelligent Information and Engineering Systems - Part II
Trust plays a fundamental role in multi-agent systems in which tasks are delegated or agents must rely on others to perform actions that they cannot perform themselves. The concept of trust may be generalised to a level of confidence in one's predictions of another agent's future behaviour. This has applicability beyond that normally ascribed to trust: for instance, one may be confident that a particular agent's intentions are hostile and that this will be borne out by particular behaviours. In this paper we present a cognitive model of trust whose central component is a Belief-Desire-Intention (BDI) model, or 'theory of mind', of a person or agent that evolves over time.
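The abstract's framing of trust as confidence in predictions of another agent's behaviour could be sketched roughly as follows. This is an illustrative sketch only, not the paper's formalism; all class and method names (`AgentModel`, `predict`, `observe`, `confidence`) are hypothetical.

```python
# Sketch: trust as confidence in one's predictions of another agent's
# behaviour, backed by a simple evolving BDI-style model of that agent.
# All names here are illustrative, not taken from the paper.
from dataclasses import dataclass, field

@dataclass
class AgentModel:
    """A minimal 'theory of mind' of another agent that evolves over time."""
    beliefs: dict = field(default_factory=dict)   # what we think it believes
    desires: set = field(default_factory=set)     # goals we attribute to it
    intentions: set = field(default_factory=set)  # actions we expect it to take
    correct: int = 0   # predictions borne out by observed behaviour
    total: int = 0     # predictions made so far

    def predict(self, action: str) -> bool:
        """Predict whether the agent will perform the given action."""
        return action in self.intentions

    def observe(self, predicted: bool, occurred: bool) -> None:
        """Update the model's track record after observing the agent."""
        self.total += 1
        if predicted == occurred:
            self.correct += 1

    def confidence(self) -> float:
        """Confidence in our predictions (a neutral 0.5 prior with no data)."""
        return self.correct / self.total if self.total else 0.5

# Note the abstract's point: confidence can be high even when the
# attributed intentions are hostile - we reliably predict the behaviour.
model = AgentModel(intentions={"attack"})
p = model.predict("attack")
model.observe(p, occurred=True)
print(model.confidence())  # high confidence in a hostile agent's behaviour
```

Under this reading, "trust" in the everyday sense and confident prediction of hostility are both instances of the same underlying quantity: prediction accuracy of the evolving model.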