Computational reputation-based trust models built on statistical learning have been studied intensively for distributed systems in which peers behave maliciously. However, practical applications of such models in environments that combine malicious and rational behaviors remain poorly understood. In this article, we study the relation between the accuracy of these models and their ability to enforce cooperation among participants and discourage selfish behavior. We provide theoretical results showing the conditions under which cooperation emerges when a computational trust model of a given accuracy is used, and how cooperation can be sustained even while reducing the cost and accuracy of that model. Specifically, we propose a peer selection protocol that uses a computational trust model as a dishonesty detector to filter out unfair ratings. We prove that, taking the rationality of participants into account, a model with a reasonable misclassification error bound in identifying malicious ratings can effectively build trust and cooperation in the system. These results reveal two interesting observations. First, the key to the success of a reputation system in a rational environment is not a sophisticated trust-learning mechanism but an effective identity-management scheme that prevents whitewashing. Second, given an appropriate identity-management mechanism, a reputation-based trust model with only a moderate accuracy bound can effectively enforce cooperation in systems with both rational and malicious participants. Consequently, cooperation may still emerge in heterogeneous environments where peers use different algorithms to detect the misbehavior of potential partners. We verify and extend these theoretical results in a variety of settings involving honest, malicious, and strategic players through extensive simulation.
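The peer selection protocol described above can be sketched in a few lines: a dishonesty detector (any classifier with a bounded misclassification error) filters the ratings reported about each candidate, and the requester selects the candidate with the highest reputation computed from the surviving ratings. The function and parameter names below, and the simple threshold detector used in the usage example, are illustrative assumptions, not the paper's actual construction.

```python
def filter_ratings(ratings, is_dishonest):
    """Keep only the ratings the detector classifies as honest.

    `is_dishonest` stands in for any computational trust model used as a
    dishonesty detector; its misclassification error is assumed bounded.
    """
    return [r for r in ratings if not is_dishonest(r)]


def reputation(ratings):
    """Aggregate retained ratings; fall back to a neutral prior of 0.5
    when every reported rating was filtered out."""
    return sum(ratings) / len(ratings) if ratings else 0.5


def select_peer(candidates, is_dishonest):
    """Pick the candidate whose filtered ratings yield the highest
    reputation. `candidates` maps peer id -> list of reported ratings."""
    return max(
        candidates,
        key=lambda peer: reputation(filter_ratings(candidates[peer], is_dishonest)),
    )


# Illustrative usage: a toy detector that flags very low ratings as unfair.
candidates = {"A": [0.9, 0.8, 0.1], "B": [0.6, 0.5]}
chosen = select_peer(candidates, is_dishonest=lambda r: r < 0.2)
```

In this toy run the outlier rating 0.1 for peer A is discarded as unfair, so A's reputation is computed from the remaining ratings and A is selected. An imperfect detector occasionally keeps unfair ratings or drops fair ones; the paper's point is that cooperation survives as long as that error stays within a reasonable bound.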
These results enable a much more targeted, cost-effective, and realistic design of decentralized trust-management systems, such as those needed for peer-to-peer, electronic commerce, and community systems.