Active perception and reinforcement learning
Proceedings of the Seventh International Conference on Machine Learning (1990)
Technical Note: Q-Learning
Machine Learning
Multiagent Coordination with Learning Classifier Systems
IJCAI '95 Proceedings of the Workshop on Adaptation and Learning in Multi-Agent Systems
Learning to coordinate actions in multi-agent systems
IJCAI '93 Proceedings of the 13th International Joint Conference on Artificial Intelligence - Volume 1
In multi-agent reinforcement learning systems, it is important to share a reward among all agents. We focus on the Rationality Theorem of Profit Sharing [5] and analyze how to share a reward among profit sharing agents. When an agent receives a direct reward R (R > 0), an indirect reward µR (µ ≥ 0) is given to the other agents. We have derived the following necessary and sufficient condition to preserve rationality: µ ≤ (M − 1) / (M^W (1 − (1/M)^{W₀})^{n−1} L), where M and L are the maximum numbers of conflicting rules and of rational rules for the same sensory input, W and W₀ are the maximum episode lengths of a direct-reward agent and an indirect-reward agent, and n is the number of agents. The theorem is derived by avoiding the least desirable situation, in which the expected reward per action is zero. Using this theorem, we can therefore share rewards among agents while preserving the rationality of each one. Through numerical examples, we confirm the effectiveness of this theorem.