Multi-agent Multi-objective Learning Using Heuristically Accelerated Reinforcement Learning

  • Authors:
  • Leonardo A. Ferreira; Reinaldo A. C. Bianchi; Carlos H. C. Ribeiro

  • Venue:
  • SBR-LARS '12 Proceedings of the 2012 Brazilian Robotics Symposium and Latin American Robotics Symposium
  • Year:
  • 2012

Abstract

This paper introduces two new algorithms aimed at solving multi-agent multi-objective reinforcement learning problems, in which the learning agent must not only interact with multiple agents but also consider various objectives (or criteria) in order to solve the problem. The main concept behind the proposed algorithms is a modular approach that divides the multiple objectives into modules, each of which learns a different objective with its own action-value and reinforcement functions. Besides the decomposition of objectives, both algorithms use a heuristic function to accelerate the learning process. The first algorithm learns one objective at a time, iterating over the objectives, while the second algorithm also divides the problem into sub-problems but learns all objectives simultaneously. The Predator-Prey problem was chosen to compare the performance of both proposed solutions with well-known algorithms. In this problem, the learning agent plays the role of the prey and must learn to find food at a fixed position in a grid world while being pursued by the predator. The considered objectives are finding food and avoiding the predator. As the results show, decomposing a multi-objective problem into sub-problems and using heuristics makes the learning process faster and easier to implement. We note that the first algorithm introduced in this paper learns faster, but it is more difficult to implement in a real-world environment.
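
The abstract does not give pseudocode, so the following is only a minimal sketch of the general idea it describes: a tabular Q-learning module per objective, each with its own reward signal, combined with a HAQL-style heuristic term that biases greedy action selection. All names and parameters here (ModuleHAQL, ModularAgent, xi, the per-objective reward keys, etc.) are hypothetical illustrations, not the authors' implementation.

```python
import random
from collections import defaultdict


class ModuleHAQL:
    """One learning module: its own Q-table, reward signal and heuristic H (sketch)."""

    def __init__(self, actions, alpha=0.2, gamma=0.9, xi=1.0):
        self.Q = defaultdict(float)   # Q[(state, action)] action-value table
        self.H = defaultdict(float)   # H[(state, action)] heuristic bonus (domain knowledge)
        self.actions = actions
        self.alpha, self.gamma, self.xi = alpha, gamma, xi

    def value(self, state, action):
        # Heuristically accelerated value: learned Q plus a weighted heuristic term.
        return self.Q[(state, action)] + self.xi * self.H[(state, action)]

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update, driven by this module's own reward.
        best_next = max(self.Q[(next_state, a)] for a in self.actions)
        td_error = reward + self.gamma * best_next - self.Q[(state, action)]
        self.Q[(state, action)] += self.alpha * td_error


class ModularAgent:
    """Prey agent keeping one module per objective (e.g. food, predator) and summing their values."""

    def __init__(self, actions, modules, epsilon=0.1):
        self.actions = actions
        self.modules = modules        # e.g. {"food": ModuleHAQL(...), "predator": ModuleHAQL(...)}
        self.epsilon = epsilon

    def act(self, state):
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        # Greedy choice over the combined (Q + xi * H) values of all modules.
        return max(self.actions,
                   key=lambda a: sum(m.value(state, a) for m in self.modules.values()))

    def learn(self, state, action, rewards, next_state):
        # Each module is updated with its own objective-specific reward.
        for name, module in self.modules.items():
            module.update(state, action, rewards[name], next_state)


# Illustrative setup for the Predator-Prey domain described in the abstract.
actions = ["N", "S", "E", "W"]
agent = ModularAgent(actions, {"food": ModuleHAQL(actions),
                               "predator": ModuleHAQL(actions)})
```

This sketch corresponds to the "simultaneous" variant, since every module is updated on each step; the alternating variant would instead update one module at a time, cycling through the objectives.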