Cooperation among agents is a central concern in artificial intelligence. In a multi-agent system (MAS), agents cooperate with one another for long-term return and, most of the time, maintain such partnerships. However, a partnership can break down easily if one agent fails or refuses to grant a favor to another. Would a controllable level of tolerance benefit the MAS or the individual agent? That is the main question of this paper. To find an answer, we propose a cooperative strategy, the flexible reciprocal altruism model (FRAM). In FRAM, an agent has a controllable rate of tolerance and is willing to grant favors for long-term return; it decides whether to grant a favor to another agent based on their past interactions. As a result, an occasional unmatched favor does not immediately break the relationship between two agents. Experiments show that our strategy performs well across different cost/value tradeoffs, numbers of agents, and loads.
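The abstract does not give FRAM's actual decision rule, so the following is only a minimal sketch of one plausible tolerance-based favor-granting rule consistent with the description: each agent tracks a per-partner net balance of favors and keeps granting as long as the partner's deficit stays within its tolerance. All names (`Agent`, `balance`, `decide`, `receive`) and the specific rule are illustrative assumptions, not the paper's model.

```python
class Agent:
    """Hypothetical tolerance-based favor-granting agent (illustrative
    sketch only; FRAM's real rule is not specified in the abstract)."""

    def __init__(self, tolerance):
        self.tolerance = tolerance  # assumed: max unreciprocated cost the agent accepts
        self.balance = {}           # per-partner net balance: value received - cost granted

    def decide(self, partner, cost):
        # Grant the favor only if, after paying this cost, the partner's
        # net balance stays within the agent's tolerance.
        new_balance = self.balance.get(partner, 0.0) - cost
        if new_balance >= -self.tolerance:
            self.balance[partner] = new_balance
            return True
        return False

    def receive(self, partner, value):
        # Record the value gained from a favor the partner granted.
        self.balance[partner] = self.balance.get(partner, 0.0) + value
```

Under this sketch, a tolerant agent keeps granting favors through a short run of unreciprocated requests, while an agent with zero tolerance refuses as soon as a partner is in deficit, which matches the abstract's claim that occasional unmatched favors need not break a partnership.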