Traditional centralized approaches to security are difficult to apply in large, distributed multi-agent systems. A notion of trust based on agents' reputation can provide a softer form of security that is sufficient for many MAS applications. However, designing a reliable and "trustworthy" reputation mechanism is not a trivial problem. In this paper, we address the issue of incentive-compatibility, i.e., why agents should report reputation information and why they should report it truthfully. By introducing a side-payment scheme organized through a set of broker agents, we make it rational for software agents to truthfully share the reputation information they have acquired in their past experience. We verify the theoretical results with a simple simulation, and conclude with an analysis of the system's robustness as the percentage of lying agents increases.
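The core idea of such a side-payment scheme can be illustrated with a minimal sketch. The payoff rule below is an assumption chosen for illustration, not the paper's exact protocol: a broker charges a fee for filing a reputation report and pays a reward only when the report agrees with the next report filed about the same provider. If other agents report honestly, a truthful report agrees with the next one far more often than a fabricated one, so truth-telling earns more in expectation.

```python
import random

random.seed(0)

# Illustrative parameters (hypothetical, not from the paper):
REPORT_COST = 1.0   # fee an agent pays the broker to file a report
REWARD = 2.5        # paid when the report matches the next report filed

def expected_earnings(truthful, n=10_000, p_good=0.9):
    """Average per-report earnings for a truthful vs. a lying reporter.

    The provider behaves well with probability p_good. The focal agent
    reports its observation (or its negation, if lying); the next report
    about the same provider is assumed to come from an honest agent.
    """
    earnings = 0.0
    for _ in range(n):
        observed = random.random() < p_good
        report = observed if truthful else not observed
        next_honest_report = random.random() < p_good
        earnings -= REPORT_COST
        if report == next_honest_report:
            earnings += REWARD
    return earnings / n

# With p_good = 0.9, a truthful report matches the next honest report
# with probability 0.9*0.9 + 0.1*0.1 = 0.82, so expected earnings are
# 0.82*2.5 - 1.0 = +1.05; an inverted (lying) report matches with
# probability only 0.18, giving 0.18*2.5 - 1.0 = -0.55.
print(expected_earnings(truthful=True))   # positive on average
print(expected_earnings(truthful=False))  # negative on average
```

Under these (assumed) prices, honest reporting is the rational strategy as long as most other reporters are honest; the paper's robustness analysis asks how far that assumption can be relaxed.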