Undesired state-action prediction in multi-agent reinforcement learning for linked multi-component robotic system control

  • Authors:
  • Borja Fernandez-Gauna; Ion Marques; Manuel Graña

  • Affiliations:
  • Computational Intelligence Group, UPV/EHU, Dept. CCIA, Spain (all authors)

  • Venue:
  • Information Sciences: an International Journal
  • Year:
  • 2013

Abstract

The paper addresses learning the control of Multi-Component Robotic Systems (MCRSs) with Multi-Agent Reinforcement Learning (MARL) algorithms. Modeling Linked MCRSs usually leads to over-constrained environments, which pose great difficulties for efficient learning with conventional single-agent and multi-agent reinforcement learning algorithms. In this paper, we propose a hybrid learning algorithm: a modified Q-Learning algorithm embedding an Undesired State-Action Prediction (USAP) module, trained by supervised learning, which predicts transitions to states that break physical constraints. The Q-Learning algorithm uses the USAP module's output to avoid these undesired transitions, thereby boosting learning efficiency. This hybrid approach is extended to the multi-agent case by embedding the USAP module in Distributed Round-Robin Q-Learning (D-RR-QL), which requires very little communication among agents. We present results of computational experiments on the classical multi-agent taxi scheduling task and on a hose transportation task. Compared with the state-of-the-art Distributed Q-Learning approach, the results show a considerable gain in both learning time and accuracy on the deterministic taxi scheduling task. On the hose transportation task, the USAP module yields a significant improvement in learning convergence speed.
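To make the mechanism concrete, the following is a minimal sketch, not the authors' implementation, of how a trained USAP classifier can mask undesired actions inside a tabular Q-Learning loop. The names usap_predict, allowed_actions, and env_step, and the dictionary stand-in for the trained supervised model, are illustrative assumptions.

import random
from collections import defaultdict

def usap_predict(state, action, model):
    # Supervised classifier: True if (state, action) is predicted to lead
    # to a constraint-violating state (e.g., an over-stretched hose).
    # A lookup table stands in for the trained model in this sketch.
    return model.get((state, action), False)

def allowed_actions(state, actions, model):
    # Mask out actions the USAP module flags as undesired.
    safe = [a for a in actions if not usap_predict(state, a, model)]
    return safe if safe else list(actions)  # fall back if all are masked

def q_learning_step(Q, state, actions, env_step, model,
                    alpha=0.1, gamma=0.95, epsilon=0.1):
    # One epsilon-greedy Q-Learning update restricted to USAP-approved
    # actions; terminal-state handling is omitted for brevity.
    candidates = allowed_actions(state, actions, model)
    if random.random() < epsilon:
        action = random.choice(candidates)
    else:
        action = max(candidates, key=lambda a: Q[(state, a)])
    next_state, reward = env_step(state, action)
    best_next = max(Q[(next_state, a)]
                    for a in allowed_actions(next_state, actions, model))
    Q[(state, action)] += alpha * (reward + gamma * best_next
                                   - Q[(state, action)])
    return next_state

Q = defaultdict(float)  # tabular action-value function

Masking actions before selection, rather than penalizing forbidden transitions after the fact, is the point of the hybrid scheme: the learner avoids spending exploration on state-action pairs that would break the physical constraints of the linked system.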