For agents to collaborate in open multi-agent systems, each agent must trust the other agents' ability to complete tasks and their willingness to cooperate. Agents must decide between cooperative and opportunistic behavior based on their assessment of another agent's trustworthiness. In particular, an agent can hold two beliefs about a potential partner that tend to indicate trustworthiness: that the partner is competent, and that the partner expects to engage in future interactions. This paper explores an approach that models competence as an agent's probability of successfully performing an action, and models belief in future interactions as a discount factor. We evaluate the underlying decision framework's performance given accurate knowledge of the model's parameters in an evolutionary game setting. We then introduce a game-theoretic framework in which an agent can learn a model of another agent online, using the Harsanyi transformation. The learning agents evaluate a set of competing hypotheses about another agent during simulated play of an indefinitely repeated game. The Harsanyi strategy demonstrates robust and successful online play against a variety of static, classic, and learning strategies in a variable-payoff Iterated Prisoner's Dilemma setting.
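The online learning step described above can be illustrated with a small sketch. This is not the paper's implementation: it assumes, for illustration, that each competing hypothesis assigns the partner a fixed probability of successfully performing an action (its competence), and that observations of success or failure update a posterior over those hypotheses in the Bayesian spirit of the Harsanyi transformation.

```python
# Illustrative sketch (hypothetical parameters, not the paper's code):
# maintain a posterior over competing hypotheses about a partner's
# competence, updating it after each observed success or failure.

hypotheses = [0.2, 0.5, 0.9]        # assumed candidate competence levels
beliefs = [1 / 3, 1 / 3, 1 / 3]     # uniform prior over the hypotheses


def update(beliefs, hypotheses, success):
    """Bayesian update: P(obs | h) is h on success, (1 - h) on failure."""
    likelihoods = [h if success else 1 - h for h in hypotheses]
    posterior = [b * l for b, l in zip(beliefs, likelihoods)]
    z = sum(posterior)
    return [p / z for p in posterior]


# Observe the partner succeed twice and fail once.
for observed_success in (True, True, False):
    beliefs = update(beliefs, hypotheses, observed_success)

# Maximum-a-posteriori estimate of the partner's competence.
map_competence = hypotheses[beliefs.index(max(beliefs))]
print(map_competence)  # -> 0.5
```

A full model along the paper's lines would combine such a competence estimate with the discount factor (the partner's expected weight on future interactions) to decide between cooperating and defecting in each round.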