This paper presents a machine-learning approach to multi-robot coordination in an unknown dynamic environment, using a multi-robot object-transportation task as the platform to assess and validate the approach. A flexible two-layer multi-agent architecture implements the coordination: four software agents form a high-level coordination subsystem, while two heterogeneous robots constitute the low-level control subsystem. Two machine-learning techniques, reinforcement learning (RL) and genetic algorithms (GAs), are integrated to make decisions as the robots cooperatively transport an object to a goal location while avoiding obstacles. A probabilistic arbitrator determines the winning output between the RL and GA modules. In particular, a modified RL algorithm, termed sequential Q-learning, is developed to resolve the behavior conflicts that arise in multi-robot cooperative transportation tasks. The learning-based high-level coordination subsystem sends commands to the low-level control subsystem, which is implemented with a hybrid force/position control scheme. Simulation and experimental results demonstrate the effectiveness and adaptivity of the developed approach.
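As a rough illustration of the arbitration idea described in the abstract, the sketch below picks the winning action between the RL and GA proposals with probability proportional to each module's value estimate. All names, the value-proportional weighting, and the coin-flip fallback are assumptions for illustration, not the paper's actual implementation:

```python
import random

def probabilistic_arbitrate(rl_action, ga_action, rl_value, ga_value, rng=random):
    """Choose between the RL and GA proposed actions.

    Hypothetical stand-in for a probabilistic arbitrator: the selection
    probability of each proposal is proportional to that module's
    non-negative value estimate.
    """
    total = rl_value + ga_value
    if total <= 0:
        # No preference information available: fall back to a fair coin flip.
        return rl_action if rng.random() < 0.5 else ga_action
    p_rl = rl_value / total
    # rng.random() is uniform on [0, 1), so the RL proposal wins
    # with probability p_rl and the GA proposal otherwise.
    return rl_action if rng.random() < p_rl else ga_action
```

With `rl_value=1.0, ga_value=0.0` the RL proposal always wins; equal values give each module a 50% chance per decision step.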