Reputation mechanisms offer an effective alternative to verification authorities for building trust in electronic markets with moral hazard. Future clients guide their business decisions by the feedback left after past transactions; if cheating is truthfully reported, it is sanctioned and thus becomes irrational. It is therefore important to ensure that rational clients have the right incentives to report honestly. As an alternative to side-payment schemes that explicitly reward truthful reports, we show that honesty can emerge as rational behavior when clients have a repeated presence in the market. To this end, we describe a mechanism that supports an equilibrium in which truthful feedback is obtained. We then characterize the set of Pareto-optimal equilibria of the mechanism and derive an upper bound on the fraction of false reports the mechanism can record. An important role in the existence of this bound is played by the fact that rational clients can establish a reputation for reporting honestly.
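The incentive argument above can be illustrated with a toy repeated-game simulation. All quantities here (detection probability, sanction factor, per-round value of being trusted) are illustrative assumptions, not the paper's actual mechanism: a client who files false reports gains in the short term, but once inconsistencies are detected, the mechanism discounts that client's reporting reputation, and with it the client's future per-round payoff.

```python
import random

def simulate(client_lies, rounds=1000, lie_gain=1.0, seed=0):
    """Toy sketch of the repeated-presence incentive (hypothetical
    parameters). Each round the client earns a payoff proportional to
    its reporting reputation; lying adds a one-shot gain but risks a
    reputation sanction when the false report is detected."""
    rng = random.Random(seed)
    reputation = 1.0              # reporting reputation, starts fully trusted
    payoff = 0.0
    for _ in range(rounds):
        payoff += reputation * 2.0    # value of being a trusted reporter
        if client_lies:
            payoff += lie_gain        # short-term gain from a false report
            if rng.random() < 0.3:    # cross-check detects the lie
                reputation *= 0.5     # mechanism sanctions the reporter
    return payoff

honest_payoff = simulate(client_lies=False)
lying_payoff = simulate(client_lies=True)
```

Under these assumed parameters the honest reporter's cumulative payoff exceeds the liar's: the liar's reputation decays geometrically, so the one-shot gains from false reports are quickly outweighed by the lost value of being trusted, which is the intuition behind honesty being sustainable in equilibrium.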