Distributed W-Learning: Multi-Policy Optimization in Self-Organizing Systems

  • Authors:
  • Ivana Dusparic; Vinny Cahill

  • Venue:
  • SASO '09 Proceedings of the 2009 Third IEEE International Conference on Self-Adaptive and Self-Organizing Systems
  • Year:
  • 2009

Abstract

Large-scale agent-based systems are required to self-optimize towards multiple, potentially conflicting, policies of varying spatial and temporal scope. As a result, not all agents may be implementing all policies at all times, resulting in agent heterogeneity. As agents share their operating environment, significant dependencies can arise between agents and therefore between policy implementations. To address self-optimization in the presence of agent heterogeneity, policy dependency, and the lack of global knowledge that is inherent in large-scale decentralized environments, we propose Distributed W-Learning (DWL). DWL is a reinforcement learning (RL)-based algorithm for collaborative agent-based self-optimization towards multiple policies, which relies only on local interactions and learning. We have evaluated DWL in a simulation of a self-organizing urban traffic control (UTC) system and show that it can improve the performance of multiple policies deployed simultaneously, even over corresponding single-policy deployments. For example, in UTC, optimizing simultaneously for cars and public transport vehicles reduces the waiting times of cars to 78% of their waiting times in the best-performing single-policy deployment that optimizes for cars only. This deployment also outperforms the widely-deployed round-robin and saturation-balancing traffic controllers that we used as baselines.
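The abstract does not spell out the mechanics of DWL, but the core of W-learning is that each policy on an agent maintains its own Q-values plus a W-value expressing how much it expects to lose when its nominated action is not taken; the agent obeys the nomination with the highest W, and in the distributed variant neighbours' nominations also compete after being scaled by a cooperation coefficient. The sketch below is a minimal, hedged illustration of that idea, not the authors' implementation: the class name `DWLAgent`, the tabular Q/W representation, and the single cooperation weight `coop` are all assumptions made for illustration.

```python
from collections import defaultdict

class DWLAgent:
    """Hedged sketch of a Distributed W-Learning-style agent (illustrative only).

    Each local policy keeps its own Q-table and a per-state W-value
    (roughly: how much the policy expects to lose when its nominated
    action is not executed). Remote policies on neighbouring agents can
    also nominate actions; their W-values are scaled by a cooperation
    coefficient before competing in the winner-take-all selection.
    """

    def __init__(self, policies, actions, alpha=0.1, gamma=0.9, coop=0.5):
        self.policies = policies                              # policy names
        self.actions = actions                                # shared action set
        self.alpha = alpha                                    # learning rate
        self.gamma = gamma                                    # discount factor
        self.coop = coop                                      # weight on remote nominations (assumed scalar)
        self.q = {p: defaultdict(float) for p in policies}    # Q[p][(state, action)]
        self.w = {p: defaultdict(float) for p in policies}    # W[p][state]

    def nominate(self, policy, state):
        """Greedy action this policy would take, with its current W-value."""
        best = max(self.actions, key=lambda a: self.q[policy][(state, a)])
        return best, self.w[policy][state]

    def select_action(self, state, remote_nominations=()):
        """Winner-take-all over local nominations and scaled remote ones.

        remote_nominations: iterable of (action, w_value) pairs from neighbours.
        """
        candidates = [self.nominate(p, state) for p in self.policies]
        candidates += [(a, self.coop * w) for a, w in remote_nominations]
        action, _ = max(candidates, key=lambda aw: aw[1])
        return action

    def update(self, state, action, rewards, next_state):
        """Per-policy Q-learning update; W tracks the loss when not obeyed."""
        for p in self.policies:
            nominated, _ = self.nominate(p, state)
            best_next = max(self.q[p][(next_state, a)] for a in self.actions)
            target = rewards[p] + self.gamma * best_next
            old_q = self.q[p][(state, action)]
            self.q[p][(state, action)] += self.alpha * (target - old_q)
            if action != nominated:
                # Policy was overruled: move W toward its estimated loss.
                loss = self.q[p][(state, nominated)] - target
                self.w[p][state] += self.alpha * (loss - self.w[p][state])
```

In a UTC setting like the one evaluated in the paper, the policies might be "minimize car waiting time" and "prioritize public transport", and a high-W remote nomination from a congested neighbouring junction would override a weakly held local preference.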