Rational Communication in Multi-Agent Environments
Autonomous Agents and Multi-Agent Systems
The field of multiagent decision making is extending its tools beyond classical game theory by embracing reinforcement learning, statistical analysis, and opponent modeling. For example, behavioral economists conclude from experimental results that people act according to levels of reasoning that form a "cognitive hierarchy" of strategies, rather than following the hyper-rational Nash equilibrium solution concept. This paper extends that model of the iterative reasoning process by widening the notion of a level within the hierarchy from a single strategy to a distribution over strategies, yielding a more general framework for multiagent decision making. The framework provides a measure of strategic sophistication and can guide the design of good strategies for multiagent games, drawing its main strength from predicting opponent strategies. We apply these lessons to the recently introduced Lemonade Stand Game, a simple setting that includes both collaborative and competitive elements, in which an agent's score depends critically on its responsiveness to opponent behavior. Opening moves strongly influence the final outcome, and simple heuristics have achieved cooperation faster than intricate learning schemes. Using results from the past two real-world tournaments, we show how the submitted entries fit naturally into our model and explain why the top agents were successful.
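To make the generalized cognitive-hierarchy idea concrete, the following is a minimal illustrative sketch, not the paper's actual model: level 0 plays a uniform distribution over actions, and each higher level soft-best-responds to a Poisson-weighted mixture of all lower-level distributions, so every level is itself a distribution over strategies rather than a single strategy. The payoff matrix, softmax temperature, and Poisson parameter `tau` are arbitrary assumptions chosen for the example.

```python
import math
import numpy as np

# Illustrative symmetric 3-action game: rows are our action,
# columns the opponent's. Values are arbitrary, for demonstration only.
PAYOFF = np.array([
    [1.0, 0.0, 3.0],
    [2.0, 1.0, 0.0],
    [0.0, 3.0, 1.0],
])

def soft_best_response(opponent_mix, temperature=0.5):
    """Return a distribution over actions: a softmax over expected
    payoffs against the opponent's action distribution."""
    expected = PAYOFF @ opponent_mix
    logits = (expected - expected.max()) / temperature  # stable softmax
    probs = np.exp(logits)
    return probs / probs.sum()

def cognitive_hierarchy(max_level, tau=1.5):
    """Build levels 0..max_level. Level 0 is uniform; level k
    soft-best-responds to a Poisson(tau)-weighted mixture of the
    strictly lower levels, renormalized over levels 0..k-1."""
    n = PAYOFF.shape[0]
    levels = [np.full(n, 1.0 / n)]  # level 0: uniform play
    for k in range(1, max_level + 1):
        w = np.array([math.exp(-tau) * tau**j / math.factorial(j)
                      for j in range(k)])
        w /= w.sum()  # belief over lower levels
        belief = sum(wj * lvl for wj, lvl in zip(w, levels))
        levels.append(soft_best_response(belief))
    return levels

for k, lvl in enumerate(cognitive_hierarchy(3)):
    print(f"level {k}: {np.round(lvl, 3)}")
```

Because each level outputs a full distribution, a population can be modeled as a mixture over levels, which is the kind of generalization the paper's framework formalizes.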