Autonomous entities in artificial societies are only willing to cooperate with entities they trust. Reputation systems track the entities' behavior and are therefore a widely used means of supporting trust formation. In a P2P network, the reputation system must be distributed across the individual entities. In previous work, we showed that some limitations of distributed reputation systems can be overcome by making use of hard evidence. In this paper, we take this idea one step further by deriving beliefs about others' trustworthiness from one's own experiences and the available hard evidence. To this end, we justify why a self-interested autonomous entity may choose to behave according to the norms of the system designer. Consequently, the proposed belief model incorporates not only behavioral beliefs but also beliefs about an entity's normativeness. We prescribe how beliefs are revised when new evidence becomes available. The introduced models for recommendations and belief formation enable us to prove that self-interested entities always issue truthful recommendations regarding transactional behavior. The simulation-based evaluation shows that a self-interested entity can be expected to be normative and thus to comply with our system design.
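The abstract does not reproduce the paper's actual belief model. As a generic illustration only, the following sketch shows how a belief about a peer's trustworthiness could be revised as new evidence arrives, using a beta-reputation-style update common in the distributed-trust literature; the class name, field names, and weighting scheme are assumptions, not the authors' formulation.

```python
class TrustBelief:
    """Illustrative belief about a peer's trustworthiness, tracked as a
    Beta(alpha, beta) distribution over the probability of good behavior.
    This is a generic sketch, not the belief model proposed in the paper."""

    def __init__(self):
        self.alpha = 1.0  # pseudo-count of positive evidence (uniform prior)
        self.beta = 1.0   # pseudo-count of negative evidence

    def revise(self, positive: bool, weight: float = 1.0) -> None:
        """Revise the belief when new evidence becomes available, e.g. an own
        experience or hard evidence backing a recommendation. The weight
        parameter (an assumption here) lets hard evidence count more
        heavily than an unverified report."""
        if positive:
            self.alpha += weight
        else:
            self.beta += weight

    def expected_trust(self) -> float:
        """Expected probability that the peer behaves cooperatively."""
        return self.alpha / (self.alpha + self.beta)


belief = TrustBelief()
belief.revise(positive=True)              # one good own experience
belief.revise(positive=True, weight=2.0)  # hard evidence, weighted higher
print(round(belief.expected_trust(), 3))  # trust estimate rises above 0.5
```

Under this kind of model, repeated positive evidence drives the expected trust toward 1, while a single piece of negative hard evidence can sharply lower it, which matches the abstract's emphasis on hard evidence as a corrective to unreliable recommendations.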