Agents' cooperation based on long-term reciprocal altruism

  • Authors:
  • Xiaowei Zhao, Haoxiang Xia, Hong Yu, Linlin Tian

  • Affiliations:
  • School of Software Technology, Institute of Systems Engineering, Dalian University of Technology, Dalian, China (all authors)

  • Venue:
  • IEA/AIE'12: Proceedings of the 25th International Conference on Industrial Engineering and Other Applications of Applied Intelligent Systems: Advanced Research in Applied Artificial Intelligence
  • Year:
  • 2012

Abstract

Cooperation among agents is critical for Artificial Intelligence (AI). In a multi-agent system (MAS), agents cooperate with one another for long-term returns and, in most cases, build lasting partnerships. However, such a partnership can break easily if one agent fails or refuses to grant a favor to another. Would it help the MAS, or the individual agents, if an agent had a controllable level of tolerance? That is the main question of this paper. To answer it, we propose a cooperative strategy, the "flexible reciprocal altruism model" (FRAM). In FRAM, an agent has a controllable rate of tolerance and is willing to grant favors for long-term return; it decides whether to grant a favor to another agent based on their past interactions. As a result, an occasional unreciprocated favor does not immediately break the relationship between two agents. Experiments show that the strategy performs well under different cost/value tradeoffs, numbers of agents, and loads.
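To illustrate the idea of tolerance-based reciprocity described in the abstract, the following is a minimal sketch, not the authors' FRAM implementation. It assumes a hypothetical TolerantAgent class in which each agent tracks a per-partner favor balance and keeps granting favors as long as the partner's unreciprocated debt stays within a tolerance threshold; the class name, the balance bookkeeping, and the cost/value payoffs are all illustrative assumptions.

```python
import random
from collections import defaultdict

class TolerantAgent:
    """Toy agent that grants favors based on the running favor balance
    with each partner, tolerating a bounded deficit before refusing.
    (Illustrative sketch only; not the paper's FRAM algorithm.)"""

    def __init__(self, name, tolerance=3):
        self.name = name
        self.tolerance = tolerance          # max unreciprocated favors tolerated
        self.balance = defaultdict(int)     # partner name -> favors granted minus received

    def decide(self, partner):
        """Grant the favor unless the partner's unreciprocated debt
        already exceeds this agent's tolerance."""
        return self.balance[partner.name] < self.tolerance

    def grant(self, partner, cost=1, value=3):
        """Pay `cost` to give the partner a favor worth `value`."""
        if self.decide(partner):
            self.balance[partner.name] += 1
            partner.balance[self.name] -= 1
            return -cost, value             # payoff change for (self, partner)
        return 0, 0                         # favor refused, no payoff change


# Usage: two agents exchange favors in random order; an occasional refusal
# does not end the relationship while the deficit stays within tolerance.
a, b = TolerantAgent("A", tolerance=3), TolerantAgent("B", tolerance=1)
for _ in range(10):
    giver, receiver = random.sample([a, b], 2)
    giver.grant(receiver)
print(a.balance, b.balance)
```

Under this toy assumption, a higher tolerance lets a relationship absorb more unreciprocated favors before cooperation stops, which is the qualitative effect the abstract attributes to a controllable tolerance rate.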