Cooperative multi-robot reinforcement learning: a framework in hybrid state space

  • Authors:
  • Xueqing Sun, Tao Mao, Jerald D. Kralik, Laura E. Ray

  • Affiliations:
  • Thayer School of Engineering, Dartmouth College, Hanover, NH (Sun, Mao, Ray); Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH (Kralik)

  • Venue:
  • IROS '09: Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems
  • Year:
  • 2009

Abstract

In autonomous multi-robot cooperation, much emphasis has been placed on coordinating individual robot behaviors so that the team completes its task optimally. This paper presents an approach to cooperative multi-robot reinforcement learning based on a hybrid state-space representation of the environment, achieving both task learning and heterogeneous role emergence in a unified framework. The methodology also incorporates learning-space reduction through a neural perception module and a progressive rescheduling algorithm that interleaves online execution with relearning to adapt to environmental uncertainty and improve performance. The approach aims to reduce the combinatorial complexity inherent in role-task optimization, yielding a satisficing solution to complex team-based tasks rather than a globally optimal one. The proposed framework is evaluated empirically through simulation of a foraging task.
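For readers unfamiliar with the task-learning component the abstract builds on, a minimal tabular Q-learning loop on a toy one-dimensional foraging corridor might look like the sketch below. This is a generic single-robot illustration only, not the paper's hybrid-state, multi-robot framework: the corridor environment, reward values, and hyperparameters are all assumptions.

```python
# Generic tabular Q-learning sketch for a toy foraging task.
# Assumptions (not from the paper): a 1-D corridor of 5 cells with food at
# the far end, reward 1.0 on reaching food, small per-step cost, and an
# epsilon-greedy behavior policy.
import random

GRID = 5            # number of cells in the corridor
FOOD = GRID - 1     # food sits in the last cell
ACTIONS = (-1, +1)  # move left / move right

def step(state, action):
    """Apply a move, clip to the corridor, and return (next, reward, done)."""
    nxt = min(max(state + action, 0), GRID - 1)
    reward = 1.0 if nxt == FOOD else -0.01  # small step cost elsewhere
    return nxt, reward, nxt == FOOD

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Learn Q[state][action_index] with the standard one-step Q-learning update."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(GRID)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda i: q[s][i])
            s2, r, done = step(s, ACTIONS[a])
            # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            q[s][a] += alpha * (r + gamma * max(q[s2]) * (not done) - q[s][a])
            s = s2
    return q

def greedy_path(q):
    """Roll out the greedy policy from the start cell."""
    s, path = 0, [0]
    while s != FOOD and len(path) < 2 * GRID:
        s, _, _ = step(s, ACTIONS[max((0, 1), key=lambda i: q[s][i])])
        path.append(s)
    return path
```

After training, the greedy policy walks straight to the food cell. The paper's contribution lies in scaling this kind of learner to a team: the hybrid state space and role emergence address the combinatorial blow-up that a naive joint-state table like the one above would suffer with multiple robots.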