Task allocation learning in a multiagent environment: Application to the RoboCupRescue simulation

  • Authors:
  • Sébastien Paquet, Brahim Chaib-draa, Patrick Dallaire, Danny Bergeron

  • Affiliations:
  • Computer Science and Software Engineering Department, Laval University, Québec, PQ, Canada (all authors; corresponding author's e-mail: spaquetse@iro.umontreal.ca)

  • Venue:
  • Multiagent and Grid Systems
  • Year:
  • 2010


Abstract

Coordinating agents in a complex environment is a hard problem, and it becomes even harder when certain characteristics of the tasks, such as the required number of agents, are unknown. In these settings, agents not only have to coordinate on the different tasks, but they also have to learn how many agents each task requires. To address this problem, we present in this paper a selective perception reinforcement learning algorithm which enables agents to learn the required number of agents that should coordinate their efforts on a given task. Even though the task description contains continuous variables, agents using our algorithm are able to learn their expected reward as a function of the task description and the number of agents. The results, obtained in the RoboCupRescue simulation environment, show an improvement in the agents' overall performance.
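The abstract describes learning an expected reward as a function of a (partly continuous) task description and the number of assigned agents. The following is an illustrative sketch only, not the paper's algorithm: it crudely stands in for selective perception by discretizing the continuous task features into a small number of bins, maintains an incremental estimate of the expected reward per (discretized features, agent count) pair, and picks the agent count with the highest learned estimate. All class and method names here are hypothetical.

```python
class TaskAllocationLearner:
    """Illustrative sketch (not the paper's exact algorithm): learn the
    expected reward of assigning n agents to a task described by
    continuous features in [0, 1), by discretizing the features and
    averaging observed rewards incrementally."""

    def __init__(self, max_agents, bins=4, alpha=0.1):
        self.max_agents = max_agents
        self.bins = bins            # buckets per continuous feature
        self.alpha = alpha          # learning rate for the running average
        self.q = {}                 # (discretized features, n) -> expected reward

    def _key(self, features, n):
        # Discretize each continuous feature into one of `bins` buckets.
        return (tuple(min(int(f * self.bins), self.bins - 1) for f in features), n)

    def update(self, features, n, reward):
        # Incremental update toward the observed reward.
        k = self._key(features, n)
        old = self.q.get(k, 0.0)
        self.q[k] = old + self.alpha * (reward - old)

    def best_n(self, features):
        # Number of agents with the highest learned expected reward.
        return max(range(1, self.max_agents + 1),
                   key=lambda n: self.q.get(self._key(features, n), 0.0))
```

For example, if a task (say, a fire of a given intensity) truly needs three agents and each extra agent carries a small cost, repeated updates drive `best_n` toward 3 for that task description.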