Cooperation in stochastic games through communication

  • Authors:
  • Raghav Aras; Alain Dutech; François Charpillet

  • Affiliations:
  • Loria / INRIA-Lorraine, France; Loria / INRIA-Lorraine, France; Loria / INRIA-Lorraine, France

  • Venue:
  • Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems

  • Year:
  • 2005

Abstract

The application of reinforcement learning principles to the search for equilibrium policies in stochastic games (SGs) has met with some success ([3], [4], [2]). The key insight of this approach is that each agent can learn his own β-discounted-reward equilibrium policy by keeping track of the Q-values of all the agents, including himself, and treating the Q-value matrix for each state as his payoff matrix. Each agent sees what actions the other agents take and what payoffs they receive. There is some evidence that, in practice, agents that do not observe the actions and payoffs of other agents (hereafter denoted as imperfectly observing agents) can still learn adversarial equilibrium (AE) policies in general-sum SGs ([1]) using naive Q-learning. Taking the Prisoners' Dilemma stage game (Table 1) as an abstraction of an SG, this implies that, even while ignoring the other agents' play, agents still learn to play DD, which is the adversarial equilibrium joint action. The payoff received in DD can be thought of as each agent's security level.
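
To make the imperfect-observation setting concrete, the sketch below runs two naive Q-learners on a repeated Prisoners' Dilemma stage game. It is a minimal illustration, not the authors' implementation: the payoff values stand in for Table 1 (which is not reproduced here), and the learning rate, exploration rate, and episode count are assumptions. Each agent updates only the Q-values of its own two actions, so neither observes the other's action or payoff.

    import random

    # Assumed Prisoners' Dilemma payoffs standing in for Table 1:
    # action 0 = Cooperate (C), action 1 = Defect (D);
    # PAYOFF[(my action, other's action)] is my payoff.
    PAYOFF = {(0, 0): 3, (0, 1): 0,
              (1, 0): 5, (1, 1): 1}

    def naive_q_learning(episodes=20000, alpha=0.1, epsilon=0.1):
        # One Q-table per agent, over its own two actions only:
        # neither agent sees the other's action or payoff.
        q = [[0.0, 0.0], [0.0, 0.0]]
        for _ in range(episodes):
            # epsilon-greedy action selection for each agent
            acts = [random.randrange(2) if random.random() < epsilon
                    else (0 if qi[0] >= qi[1] else 1)
                    for qi in q]
            for i in (0, 1):
                reward = PAYOFF[(acts[i], acts[1 - i])]
                # the stage game has a single state, so no next-state term
                q[i][acts[i]] += alpha * (reward - q[i][acts[i]])
        return q

    if __name__ == "__main__":
        for i, qi in enumerate(naive_q_learning()):
            greedy = "C" if qi[0] >= qi[1] else "D"
            print(f"agent {i}: Q(C)={qi[0]:.2f}  Q(D)={qi[1]:.2f}  -> plays {greedy}")

Under these assumed payoffs, defection strictly dominates, so both independent learners end up with Q(D) > Q(C) and play DD; the per-agent payoff of 1 received there corresponds to the security level mentioned above.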