This paper presents properties and results of a new framework for sequential decision-making in multiagent settings called interactive partially observable Markov decision processes (I-POMDPs). I-POMDPs generalize POMDPs, a well-known framework for decision-theoretic planning in uncertain domains, to settings in which an agent must plan a course of action in an environment populated by other agents.
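To make the generalization concrete, the sketch below illustrates the core idea in miniature: an agent's belief is held over *interactive* states that pair a physical state with a model of the other agent, and is updated by Bayes' rule on observations. This is a minimal illustration, not the paper's formal construction; the names (`InteractiveState`, `update_belief`), the two-state domain, and the observation likelihoods are all assumptions introduced here.

```python
from dataclasses import dataclass

# Illustrative sketch only: the class/function names and the toy two-state
# domain are assumptions for exposition, not the paper's notation.

@dataclass(frozen=True)
class InteractiveState:
    physical: str          # physical state of the environment
    other_model: tuple     # a (simplified) model of the other agent:
                           # here, its belief over physical states

def update_belief(belief, obs, obs_prob):
    """Bayes update of a belief over interactive states given an observation.

    belief:   dict mapping InteractiveState -> probability
    obs_prob: function (obs, physical_state) -> observation likelihood
    """
    posterior = {s: p * obs_prob(obs, s.physical)
                 for s, p in belief.items()}
    z = sum(posterior.values())
    if z == 0:
        raise ValueError("observation has zero probability under belief")
    return {s: p / z for s, p in posterior.items()}

# Usage: two physical states; the other agent is modelled with a
# uniform belief over them.
m = (0.5, 0.5)
b0 = {InteractiveState("left", m): 0.5,
      InteractiveState("right", m): 0.5}
lik = lambda obs, s: 0.85 if obs == s else 0.15  # noisy sensor
b1 = update_belief(b0, "left", lik)  # belief shifts toward "left"
```

In the full I-POMDP framework the other agent's model is itself an intentional model with its own (nested) beliefs, which is what distinguishes the interactive state space from an ordinary POMDP state space; this sketch flattens that nesting to a single belief vector for brevity.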