In infinitely repeated games, teaching an outcome to our adversaries can help reach coordination, and can also 'steer' adversaries towards outcomes that are more beneficial to us. Teaching works well against followers, i.e. agents that are willing to go along with the proposal, but can lead to miscoordination otherwise. For infinitely repeated games there is, as yet, no clear formalism that captures and combines these behaviours into a unified view in order to reach a solution of the game. In this paper, we propose such a formalism in the form of an algorithmic criterion based on the concept of targeted learning, and we argue that adopting this criterion is beneficial for reaching coordination. We then propose an algorithm that adheres to the criterion: it teaches pure-strategy Nash equilibria to a broad class of opponents in a broad class of games, follows otherwise, and performs well in self-play.
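The teach-versus-follow distinction can be illustrated with a minimal sketch. This is not the paper's algorithm; the agent, payoffs, and `patience` parameter below are illustrative assumptions. The agent insists on its preferred pure-strategy equilibrium action for a fixed number of rounds (teaching) and, if the opponent never coordinates, switches to going along with the opponent's play (following):

```python
# Toy sketch (NOT the paper's algorithm): a teach-then-follow agent in a
# repeated 2x2 coordination game with two pure Nash equilibria, (0,0) and
# (1,1), where this agent prefers (0,0).

class TeachThenFollow:
    """Propose the preferred equilibrium action for `patience` rounds
    (teaching); if the opponent has not coordinated by then, copy the
    opponent's last action instead (following)."""
    def __init__(self, preferred_action=0, patience=10):
        self.preferred = preferred_action
        self.patience = patience
        self.round = 0
        self.opp_last = None

    def act(self):
        if self.round < self.patience or self.opp_last == self.preferred:
            return self.preferred      # teach: keep proposing our equilibrium
        return self.opp_last           # follow: go along with the opponent

    def observe(self, opp_action):
        self.opp_last = opp_action
        self.round += 1

def play(agent, opponent_policy, rounds=30):
    """Run the repeated game; opponent_policy maps the agent's previous
    action (None on round one) to the opponent's next action."""
    history, my_last = [], None
    for _ in range(rounds):
        a = agent.act()
        b = opponent_policy(my_last)
        agent.observe(b)
        my_last = a
        history.append((a, b))
    return history

# A 'follower' opponent: copies whatever the agent played last round.
follower = lambda my_last: my_last if my_last is not None else 1
hist = play(TeachThenFollow(), follower)
print(hist[-1])  # after one round of teaching, play settles on (0, 0)
```

Against a follower, the joint play converges to the teacher's preferred equilibrium; against a stubborn opponent that never yields, the `patience` cutoff is what prevents indefinite miscoordination.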