Agents' cooperation based on long-term reciprocal altruism
IEA/AIE'12 Proceedings of the 25th international conference on Industrial Engineering and Other Applications of Applied Intelligent Systems: advanced research in applied artificial intelligence
In open environments, there is no central control over agent behavior. On the contrary, agents in such systems can be assumed to be driven primarily by self-interest. Under the assumption that agents remain in the system for significant periods, or that the agent composition changes only slowly, we have previously presented a prescriptive strategy for promoting and sustaining cooperation among self-interested agents. The adaptive, probabilistic policy we prescribed promotes reciprocal cooperation that ultimately improves both individual and group performance. In the short run, however, selfish agents can still exploit reciprocating agents. In this paper, we evaluate the hypothesis that the exploitative tendencies of selfish agents can be effectively curbed if reciprocating agents share their "opinions" of other agents. Since the true nature of an agent is not known a priori and must be learned from experience, believing others also poses hazards of its own. We present a learned trust-based evaluation function that is shown to resist both individual and concerted deception on the part of selfish agents.
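The mechanism described above can be sketched in code. The following is a minimal, illustrative sketch, not the paper's actual model: the class name, the logistic help-probability rule, and the trust-update formula are all assumptions introduced here to show how a probabilistic reciprocity policy might combine direct experience with trust-weighted shared opinions.

```python
import math

class ReciprocalAgent:
    """Hypothetical sketch: probabilistic reciprocity with
    trust-weighted opinion sharing. All formulas are illustrative."""

    def __init__(self, name, learning_rate=0.1):
        self.name = name
        self.balance = {}   # net help received minus given, per partner
        self.trust = {}     # learned trust in each opinion provider
        self.learning_rate = learning_rate

    def direct_score(self, other):
        # Favor partners who have helped us more than we helped them.
        return self.balance.get(other, 0.0)

    def reputation(self, other, opinions):
        # opinions: {provider_name: reported score for `other`}.
        # Weight each shared opinion by learned trust in its provider
        # (unknown providers start at a neutral 0.5).
        total_w = sum(self.trust.get(p, 0.5) for p in opinions)
        if total_w == 0:
            return 0.0
        weighted = sum(self.trust.get(p, 0.5) * s for p, s in opinions.items())
        return weighted / total_w

    def help_probability(self, other, opinions=None):
        # Combine direct experience with shared opinions, then squash
        # through a logistic so the decision stays probabilistic.
        score = self.direct_score(other)
        if opinions:
            score += self.reputation(other, opinions)
        return 1.0 / (1.0 + math.exp(-score))

    def update_trust(self, provider, reported, observed):
        # Move trust toward 1 when a provider's past report agreed with
        # our later direct observation, toward 0 when it did not.
        target = 1.0 if abs(reported - observed) < 0.5 else 0.0
        t = self.trust.get(provider, 0.5)
        self.trust[provider] = t + self.learning_rate * (target - t)
```

Under this sketch, a selfish agent that defects against one reciprocating agent lowers its balance there directly, and — once opinions are shared — lowers its help probability with every agent that trusts the victim's report; a provider caught filing deceptive reports loses trust weight, which is the intuition behind resisting concerted deception.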