An extension of a hierarchical reinforcement learning algorithm for multiagent settings

  • Authors:
  • Ioannis Lambrou; Vassilis Vassiliades; Chris Christodoulou

  • Affiliations:
  • Department of Computer Science, University of Cyprus, Nicosia, Cyprus (all authors)

  • Venue:
  • EWRL'11: Proceedings of the 9th European Conference on Recent Advances in Reinforcement Learning
  • Year:
  • 2011


Abstract

This paper investigates and compares single-agent reinforcement learning (RL) algorithms on the simple and an extended taxi problem domain, as well as multiagent RL algorithms on a multiagent extension of the simple taxi domain that we created. In particular, we extend the Policy Hill Climbing (PHC) and Win or Learn Fast-PHC (WoLF-PHC) algorithms by combining them with the MAXQ hierarchical decomposition and investigate their efficiency. The results for the multiagent domain are very promising, as they indicate that these two newly created algorithms are the most efficient of the algorithms we compared.
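For context, WoLF-PHC (one of the algorithms the paper combines with the MAXQ decomposition) maintains a Q-table alongside a mixed policy that is hill-climbed toward the greedy action, stepping faster when the agent is "losing" (its current policy underperforms its average policy) than when it is "winning". The sketch below is a minimal, generic single-agent WoLF-PHC agent, not the paper's MAXQ-combined multiagent version; the class name, parameter values, and environment interface are illustrative assumptions.

```python
import random
from collections import defaultdict

# Generic WoLF-PHC sketch (illustrative; not the paper's MAXQ-combined algorithm).
class WoLFPHCAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.9,
                 delta_win=0.01, delta_lose=0.04):
        self.actions = list(actions)
        self.alpha = alpha              # Q-learning step size
        self.gamma = gamma              # discount factor
        self.delta_win = delta_win      # small policy step when "winning"
        self.delta_lose = delta_lose    # larger policy step when "losing"
        n = len(self.actions)
        self.Q = defaultdict(lambda: {a: 0.0 for a in self.actions})
        self.pi = defaultdict(lambda: {a: 1.0 / n for a in self.actions})
        self.avg_pi = defaultdict(lambda: {a: 1.0 / n for a in self.actions})
        self.counts = defaultdict(int)

    def choose_action(self, state):
        # Sample an action from the current mixed policy pi(state, .)
        r, cum = random.random(), 0.0
        for a in self.actions:
            cum += self.pi[state][a]
            if r <= cum:
                return a
        return self.actions[-1]

    def update(self, state, action, reward, next_state):
        # Standard Q-learning backup
        best_next = max(self.Q[next_state].values())
        self.Q[state][action] += self.alpha * (
            reward + self.gamma * best_next - self.Q[state][action])

        # Incrementally update the average policy estimate for this state
        self.counts[state] += 1
        c = self.counts[state]
        for a in self.actions:
            self.avg_pi[state][a] += (self.pi[state][a] - self.avg_pi[state][a]) / c

        # "Win or Learn Fast": winning if the current policy's expected value
        # exceeds that of the average policy; choose the step size accordingly
        v_pi = sum(self.pi[state][a] * self.Q[state][a] for a in self.actions)
        v_avg = sum(self.avg_pi[state][a] * self.Q[state][a] for a in self.actions)
        delta = self.delta_win if v_pi > v_avg else self.delta_lose

        # Hill-climb the policy toward the greedy action by delta
        greedy = max(self.actions, key=lambda a: self.Q[state][a])
        for a in self.actions:
            if a == greedy:
                self.pi[state][a] = min(1.0, self.pi[state][a] + delta)
            else:
                self.pi[state][a] = max(
                    0.0, self.pi[state][a] - delta / (len(self.actions) - 1))
        # Renormalise so pi(state, .) remains a valid probability distribution
        total = sum(self.pi[state].values())
        for a in self.actions:
            self.pi[state][a] /= total
```

Plain PHC corresponds to using a single fixed delta; the paper's contribution, per the abstract, is to apply these updates within the subtasks of a MAXQ task hierarchy rather than over the flat state-action space.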