Autonomous agents require trust and reputation concepts in order to identify communities of agents with which to interact reliably, in ways analogous to humans. Agent societies are invariably heterogeneous: multiple decision-making policies and actions govern their behaviour. By introducing naive agents, this paper shows empirically that while learning agents can identify malicious agents through direct interaction, naive agents compromise the society's utility through their inability to discern malicious agents. The paper also analyzes how the proportion of naive agents affects the society, and demonstrates that witness interaction trust is needed to detect naive agents, in addition to direct interaction trust for detecting malicious agents. Finally, by proposing a set of policies, the paper shows how learning agents can isolate themselves from both naive and malicious agents.
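The two trust notions above can be illustrated with a minimal sketch. This is not the paper's actual model; the cooperation rates, the exponential-moving-average update, and the witness-credibility measure are all illustrative assumptions. Direct interaction trust is learned from a learning agent's own outcomes, and witness interaction trust is estimated by comparing a witness's reports against the agent's own direct estimates, which is what lets a naive (undiscerning) witness be identified:

```python
import random

random.seed(42)

# Hypothetical behaviours (illustrative, not from the paper): a "good"
# provider cooperates ~90% of the time, a "malicious" one ~10%.
COOPERATION_RATE = {"good": 0.9, "malicious": 0.1}

def interact(provider_type):
    """Return 1.0 for a cooperative outcome, 0.0 for a defection."""
    return 1.0 if random.random() < COOPERATION_RATE[provider_type] else 0.0

def update_trust(trust, outcome, alpha=0.2):
    """One simple direct-interaction trust update: exponential moving average."""
    return (1 - alpha) * trust + alpha * outcome

# Direct interaction trust: a learning agent interacts with both providers
# and updates its trust in each from its own outcomes.
trust = {"good": 0.5, "malicious": 0.5}  # neutral priors
for _ in range(200):
    for provider in trust:
        trust[provider] = update_trust(trust[provider], interact(provider))

print(trust)  # the malicious provider's trust ends up low

# Witness interaction trust: rate a witness by how well its reports agree
# with our own direct estimates. A naive witness reports 0.5 for everyone
# (it cannot discern providers), so its credibility drops.
def witness_credibility(reports, own_estimates):
    """1 minus the mean absolute divergence between reports and own estimates."""
    errors = [abs(reports[p] - own_estimates[p]) for p in own_estimates]
    return 1.0 - sum(errors) / len(errors)

naive_reports = {"good": 0.5, "malicious": 0.5}
accurate_reports = {"good": 0.9, "malicious": 0.1}
print(witness_credibility(naive_reports, trust))
print(witness_credibility(accurate_reports, trust))
```

Under these assumptions, a policy in the spirit of the paper would then drop providers whose direct trust falls below a threshold and discount witnesses whose credibility does, isolating the learning agent from malicious and naive agents respectively.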