Distributed path planning for mobile robots using a swarm of interacting reinforcement learners

  • Authors: Christopher M. Vigorito
  • Affiliation: University of Massachusetts Amherst, Amherst, MA
  • Venue: Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS)
  • Year: 2007

Abstract

Path planning for mobile robots in stochastic, dynamic environments is a difficult problem and the subject of much research in robotics. While many approaches place the computational burden of path planning on the robot itself, physical path planning methods shift this burden to a set of sensor nodes distributed throughout the environment that communicate path-cost information to one another. Previous approaches to physical path planning have evaluated such networks in regular environments (e.g., office buildings) using highly structured, uniform network deployments (e.g., grids). Moreover, these networks do not make use of real experience obtained from the robots they guide. We extend previous work in this area by incorporating reinforcement learning techniques into these methods and show improved performance in simulated, rough-terrain environments. We also show that these networks, which we term SWIRLs (Swarms of Interacting Reinforcement Learners), can perform well with deployment distributions that are less structured than those used in previous approaches.
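The core idea described above can be illustrated with a small sketch: each sensor node maintains a cost-to-goal estimate, refines it from its neighbors' estimates via asynchronous Bellman-style updates, and blends in costs actually experienced by traversing robots. This is a hypothetical, minimal illustration under assumed node layout, link costs, and update rules, not the paper's exact algorithm.

```python
# Hypothetical sketch of a SWIRL-style sensor network. Each node keeps a
# cost-to-goal estimate, updated from neighbors (value propagation) and from
# observed robot traversal costs (a simple learning-rate blend). All names,
# costs, and update rules here are illustrative assumptions.

class Node:
    def __init__(self, name, is_goal=False):
        self.name = name
        self.is_goal = is_goal
        self.value = 0.0 if is_goal else float("inf")
        self.neighbors = {}  # neighbor Node -> estimated link traversal cost

    def connect(self, other, cost):
        # Undirected link with a shared initial cost estimate.
        self.neighbors[other] = cost
        other.neighbors[self] = cost

    def relax(self):
        """Bellman-style update: take the cheapest route through a neighbor."""
        if self.is_goal:
            return
        self.value = min(
            (cost + nbr.value for nbr, cost in self.neighbors.items()),
            default=float("inf"),
        )

    def observe(self, neighbor, actual_cost, alpha=0.5):
        """Blend a robot's experienced traversal cost into the link estimate."""
        old = self.neighbors[neighbor]
        self.neighbors[neighbor] = old + alpha * (actual_cost - old)

def plan(nodes, sweeps=10):
    # Repeated local updates; in a real network these would run
    # asynchronously on each node as messages arrive.
    for _ in range(sweeps):
        for node in nodes:
            node.relax()

# Tiny example: start -3- a -2- goal, plus a costly direct link start -10- goal.
goal, a, start = Node("goal", is_goal=True), Node("a"), Node("start")
goal.connect(a, 2.0)
a.connect(start, 3.0)
start.connect(goal, 10.0)
plan([goal, a, start])
print(start.value)  # 5.0: the route through node a beats the direct link
```

A robot following the gradient of these estimates simply moves toward the neighbor minimizing `cost + neighbor.value`; the `observe` hook is where the reinforcement-learning refinement from real robot experience would enter.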