In open multi-agent systems, agents typically need to rely on others for the provision of information or the delivery of resources. However, since agents' capabilities, goals, and intentions do not necessarily align, trust cannot be taken for granted: an agent cannot always be expected to be willing and able to perform optimally from a focal agent's point of view. Instead, the focal agent has to form and update beliefs about other agents' capabilities and intentions. Many approaches, models, and techniques that generate trust and reputation values have been proposed for this purpose. In this paper, employing one particularly popular trust model, we focus on how an agent may use such trust values in trust-based decision-making about the value of a binary variable. We use computer simulation experiments to assess the relative efficacy of a variety of decision-making methods. In doing so, we argue for systematic analysis of such methods beforehand, so that, based on an investigation of the characteristics of different methods, different classes of parameter settings can be distinguished. Whether a method performs better or worse than alternatives on average across many random problem instances is not the issue, since an agent using the method always operates in one particular setting. We find that combining trust values using our likelihood method yields performance that is relatively robust to changes in the setting an agent may find herself in.
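The idea of combining trust values via a likelihood method can be illustrated with a minimal sketch. Here each trust value is read, purely as an assumption for illustration, as the probability that the corresponding agent reports the true value of the binary variable; reports are then combined through a log-likelihood ratio. The function name and this probabilistic interpretation are hypothetical, not the paper's exact model.

```python
import math

def likelihood_decision(reports, trust):
    """Decide a binary variable from agents' True/False reports.

    Assumption (for illustration only): trust value t in (0, 1) is the
    probability that the reporting agent tells the truth.
    """
    # Log-likelihood ratio of "variable is True" vs. "variable is False".
    llr = 0.0
    for report, t in zip(reports, trust):
        # Clamp to avoid log(0) for fully (dis)trusted agents.
        t = min(max(t, 1e-9), 1 - 1e-9)
        if report:
            llr += math.log(t / (1 - t))   # a True report favours True
        else:
            llr += math.log((1 - t) / t)   # a False report favours False
    return llr > 0  # decide True iff the weighted evidence favours it

# One highly trusted agent outweighs two weakly trusted dissenters.
print(likelihood_decision([True, False, False], [0.9, 0.6, 0.6]))
```

A simple majority vote over the same reports would decide False here; weighting each report by its sender's trust is what lets the single reliable agent dominate.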