Distributed relational temporal difference learning

  • Authors:
  • Qiangfeng Peter Lau, Mong Li Lee, Wynne Hsu

  • Affiliations:
  • National University of Singapore, Singapore (all authors)

  • Venue:
  • Proceedings of the 2013 International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2013)
  • Year:
  • 2013


Abstract

Relational representations have great potential for rapidly generalizing learned knowledge in large Markov decision processes such as multi-agent problems. In this work, we introduce relational temporal difference learning for the distributed case, where the communication links among agents are dynamic and hence no critical component of the system can reside in any single agent. Relational generalization across agents' learning is achieved through partially bound relational features and a message passing scheme. We further describe how the proposed concepts can be applied to distributed reinforcement learning methods that use value functions. Experiments were conducted on soccer and real-time strategy game domains with dynamic communication. Results show that our methods improve goal achievement during online learning while requiring far fewer learned parameters than existing distributed learning methods.
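
The abstract does not spell out the update rule, but the core combination it describes can be sketched: temporal difference learning over a linear value function whose features are relational templates shared by all agents, plus periodic message passing among whichever agents are currently connected. The sketch below is a minimal, assumption-laden illustration: the feature names, the neighbour-averaging exchange step, the random transitions, and all parameter values are stand-ins, not the paper's actual scheme.

```python
# Minimal sketch: distributed TD(0) with linear value approximation over
# shared relational feature templates. Everything below (feature names,
# averaging-based exchange, random rewards) is illustrative, not the
# method from the paper.
import random

ALPHA, GAMMA = 0.1, 0.9  # learning rate and discount factor (assumed values)

class Agent:
    def __init__(self, feature_names):
        # One weight per relational feature template; the templates are
        # shared across agents, which is what lets learning generalize.
        self.w = {f: 0.0 for f in feature_names}

    def value(self, features):
        # Linear value estimate: V(s) = sum_f w[f] * phi_f(s).
        return sum(self.w[f] * x for f, x in features.items())

    def td_update(self, feats, reward, next_feats, terminal=False):
        # Standard TD(0) update on this agent's local transition.
        target = reward + (0.0 if terminal else GAMMA * self.value(next_feats))
        delta = target - self.value(feats)
        for f, x in feats.items():
            self.w[f] += ALPHA * delta * x

def exchange(agents, links):
    # Stand-in for message passing over *dynamic* links: each agent
    # averages its weights with its current neighbours, so no single
    # agent holds a critical, centralized copy of the value function.
    new = []
    for i, a in enumerate(agents):
        group = [a] + [agents[j] for j in links.get(i, [])]
        new.append({f: sum(g.w[f] for g in group) / len(group) for f in a.w})
    for a, w in zip(agents, new):
        a.w = w

# Hypothetical relational feature templates in the spirit of "partially
# bound" features: some arguments are bound (Me), others left as variables.
FEATURES = ["dist(Me, Ball)", "dist(Me, Teammate)", "open(Teammate)"]
agents = [Agent(FEATURES) for _ in range(3)]

for step in range(100):
    links = {0: [1], 1: [0, 2], 2: [1]}  # communication graph; may change per step
    for a in agents:
        feats = {f: random.random() for f in FEATURES}       # stand-in observations
        next_feats = {f: random.random() for f in FEATURES}
        a.td_update(feats, reward=random.choice([0.0, 1.0]), next_feats=next_feats)
    exchange(agents, links)

print({f: round(w, 3) for f, w in agents[0].w.items()})
```

Because every agent runs the same loop and the exchange step only touches currently linked neighbours, the sketch degrades gracefully when links drop, which mirrors the decentralization requirement stated in the abstract.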