Multi-robot coalition formation
IEEE Transactions on Robotics
Coalition formation algorithms are generally not applicable to real-world robotic collectives because they lack mechanisms for handling uncertainty. The mechanisms that do address uncertainty either deflect it by soliciting information from other agents or apply reinforcement learning to select an agent type from within a known set. This paper presents a coalition formation mechanism that addresses uncertainty directly while allowing agent types to fall outside a known set. Agent types are captured through a novel agent modeling technique that handles uncertainty via a belief-based evaluation mechanism, accommodating uncertainty in environmental data, agent type, coalition value, and agent cost. An investigation is provided of both the effect of adding agents on processing time and the effect of initial agent-model quality on the models' convergence rate (and thereby on coalition quality). This approach handles uncertainty on a larger scale than previous work and provides a mechanism readily applied to a dynamic collective of real-world robots.
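As a minimal illustration of belief-based coalition evaluation (not the paper's actual algorithm — the `AgentBelief` class, Gaussian belief model, and value function below are all assumptions made for this sketch), one could maintain a probabilistic belief over each agent's unknown capability, refine it from noisy observations, and score a candidate coalition by its expected value:

```python
import random

class AgentBelief:
    """Gaussian belief over an agent's true capability (hypothetical model)."""
    def __init__(self, mean=0.5, var=1.0):
        self.mean = mean  # current estimate of the agent's capability
        self.var = var    # uncertainty about that estimate

    def update(self, observation, obs_var=0.1):
        # Conjugate Bayesian update: Gaussian prior + Gaussian likelihood.
        k = self.var / (self.var + obs_var)   # gain on the new observation
        self.mean += k * (observation - self.mean)
        self.var *= (1 - k)                   # uncertainty shrinks with data

def expected_coalition_value(beliefs, costs):
    """Expected coalition value: believed capabilities minus agent costs."""
    return sum(b.mean for b in beliefs) - sum(costs)

# Usage sketch: beliefs converge toward the (hidden) true capabilities
# as noisy task-performance observations accumulate.
random.seed(0)
true_caps = [0.9, 0.4, 0.7]
beliefs = [AgentBelief() for _ in true_caps]
costs = [0.1, 0.1, 0.1]
for _ in range(50):
    for b, cap in zip(beliefs, true_caps):
        b.update(cap + random.gauss(0, 0.1))
print(expected_coalition_value(beliefs, costs))
```

Because the belief variance shrinks with every observation, a coalition's evaluation becomes increasingly reliable without requiring agent types to come from a predefined set.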