We argue that it is important to analyze the role of trust and deception in interactions between agents in virtual societies. In particular, in hybrid situations where artificial agents interact with human agents, it is important that those artificial agents can reason about the trustworthiness and deceptive actions of their human counterparts. To support this interaction between agents in virtual societies, a theory of trust and deception must be developed. In the literature, a wide variety of theories of trust (far fewer of deception!) have been developed, but not specifically for virtual communities. Building on these earlier scientific results, we make a first attempt to develop a general theory of trust and deception for virtual communities, and we discuss a number of examples to illustrate which objectives such a theory should fulfill.