In this paper, we investigate the use of hierarchical reinforcement learning (HRL) to speed up the acquisition of cooperative multi-agent tasks. We introduce a hierarchical multi-agent reinforcement learning (RL) framework and propose a hierarchical multi-agent RL algorithm called Cooperative HRL. In this framework, agents are cooperative and homogeneous (i.e., they use the same task decomposition). Learning is decentralized, with each agent learning three interrelated skills: how to perform each individual subtask, the order in which to carry out subtasks, and how to coordinate with other agents. We define cooperative subtasks to be those subtasks in which coordination among agents significantly improves the performance of the overall task. The levels of the hierarchy that include cooperative subtasks are called cooperation levels. A fundamental property of the proposed approach is that it allows agents to learn coordination faster by sharing information at the level of cooperative subtasks, rather than attempting to learn coordination at the level of primitive actions.

We study the empirical performance of the Cooperative HRL algorithm on two testbeds: a simulated two-robot trash-collection task and a larger four-agent automated guided vehicle (AGV) scheduling problem. We compare the performance and speed of Cooperative HRL with other learning algorithms, as well as with several well-known industrial AGV heuristics.

We also address the issue of rational communication behavior among autonomous agents. The goal is for agents to learn both action and communication policies that together optimize the task given a communication cost. We extend the multi-agent HRL framework to include communication decisions and propose a cooperative multi-agent HRL algorithm called COM-Cooperative HRL. In this algorithm, we add a communication level to the hierarchical decomposition of the problem below each cooperation level.
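To make the idea of coordinating at the level of cooperative subtasks (rather than primitive actions) concrete, here is a minimal tabular sketch of a joint-subtask Q-update for two agents. The subtask names, state encoding, and one-step update rule are illustrative assumptions, not the paper's actual Cooperative HRL algorithm.

```python
import random
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.95
SUBTASKS = ["collect", "deliver"]  # hypothetical cooperative subtasks

# Q[agent][(state, own_subtask, other_subtask)] -> value.
# Conditioning on the other agent's *subtask* keeps the joint space far
# smaller than conditioning on its primitive actions would.
Q = [defaultdict(float), defaultdict(float)]

def choose_subtask(agent, state, other_subtask, eps=0.1):
    """Epsilon-greedy choice over the agent's own subtasks, given the
    subtask selected by the other agent at the cooperation level."""
    if random.random() < eps:
        return random.choice(SUBTASKS)
    return max(SUBTASKS, key=lambda s: Q[agent][(state, s, other_subtask)])

def update(agent, state, own, other, reward, next_state, next_other):
    """One-step Q-update at the cooperative-subtask level."""
    best_next = max(Q[agent][(next_state, s, next_other)] for s in SUBTASKS)
    key = (state, own, other)
    Q[agent][key] += ALPHA * (reward + GAMMA * best_next - Q[agent][key])
```

Here each table entry covers an entire subtask execution, so agents only need to agree at subtask boundaries; this is the intuition behind learning coordination faster at cooperation levels.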
Before an agent makes a decision at a cooperative subtask, it decides whether it is worthwhile to perform a communication action. A communication action has a certain cost and provides the agent with the actions selected by the other agents at that cooperation level. We demonstrate the efficiency of the COM-Cooperative HRL algorithm, as well as the relation between the communication cost and the learned communication policy, on a multi-agent taxi problem.
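The communicate-or-not decision can be viewed as a value-of-information test: pay the communication cost only if knowing the other agents' choices is expected to improve the agent's value by more than that cost. The sketch below illustrates this trade-off with a hypothetical belief over the other agent's subtask choice; the function names, decision rule, and cost value are assumptions for exposition, not the COM-Cooperative HRL update itself.

```python
COMM_COST = 0.2  # hypothetical cost of one communication action

def value_with_comm(q_joint, state, other_subtask, subtasks):
    """Best achievable value once the other agent's subtask is known."""
    return max(q_joint[(state, s, other_subtask)] for s in subtasks)

def value_without_comm(q_joint, state, subtasks, belief):
    """Best expected value when marginalising over a belief about the
    other agent's subtask choice (no communication)."""
    return max(
        sum(p * q_joint[(state, s, o)] for o, p in belief.items())
        for s in subtasks
    )

def should_communicate(q_joint, state, subtasks, belief):
    """Communicate iff the expected gain from learning the other agent's
    choice exceeds the communication cost."""
    expected_informed = sum(
        p * value_with_comm(q_joint, state, o, subtasks)
        for o, p in belief.items()
    )
    gain = expected_informed - value_without_comm(q_joint, state, subtasks, belief)
    return gain > COMM_COST
```

When the agents' values are insensitive to each other's choices, the gain is near zero and the learned policy should communicate rarely; as the cost shrinks or the coupling grows, communication becomes worthwhile, which matches the cost/policy relation the abstract reports on the taxi problem.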