The high growth rate of vehicles per capita now poses a real challenge to efficient Urban Traffic Control (UTC). An efficient solution to UTC must be adaptive in order to deal with the highly dynamic nature of urban traffic. In the near future, global positioning systems and vehicle-to-vehicle/infrastructure communication may provide a more detailed local view of the traffic situation that could be employed for better global UTC optimization. In this paper we describe the design of a next-generation UTC system that exploits such local knowledge about a junction's traffic in order to optimize traffic control. Global UTC optimization is achieved using a local Adaptive Round Robin (ARR) phase switching model optimized using Collaborative Reinforcement Learning (CRL). The design employs an ARR-CRL-based agent controller for each signalized junction that collaborates with neighbouring agents in order to learn appropriate phase timings based on the traffic pattern. We compare our approach to a non-adaptive fixed-time UTC system and to a saturation balancing algorithm in a large-scale simulation of traffic in Dublin's inner city centre. We show that the ARR-CRL approach can provide significant improvement, resulting in up to ~57% lower average waiting time per vehicle compared to the saturation balancing algorithm.
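To make the ARR-CRL idea concrete, the following is a minimal sketch, not the paper's implementation: each junction agent cycles through its phases round-robin, learns a Q-value for each (phase, green-time) pair from a toy waiting-time reward, and blends its bootstrap target with values advertised by neighbouring agents, which is a simplified stand-in for the collaborative step of CRL. The class name `JunctionAgent`, the candidate durations, the throughput model, and the mixing weight `beta` are all illustrative assumptions.

```python
import random

class JunctionAgent:
    """Hypothetical junction controller: Adaptive Round Robin phase
    switching whose green times are tuned by a simplified collaborative
    Q-learning update (an illustrative stand-in for the paper's CRL)."""

    PHASES = ("north-south", "east-west")
    DURATIONS = (10, 20, 30)  # candidate green times in seconds (assumed)

    def __init__(self, name, alpha=0.1, gamma=0.9, beta=0.5, epsilon=0.1):
        self.name = name
        self.alpha, self.gamma = alpha, gamma
        self.beta = beta          # weight on neighbours' advertised values
        self.epsilon = epsilon    # exploration rate
        self.neighbours = []
        # Q[(phase, duration)] -> estimated long-run (negative) waiting cost
        self.q = {(p, d): 0.0 for p in self.PHASES for d in self.DURATIONS}
        self.phase_idx = 0

    def advertised_value(self, phase):
        """Best local estimate for a phase; shared with neighbouring agents."""
        return max(self.q[(phase, d)] for d in self.DURATIONS)

    def choose_duration(self, phase):
        """Epsilon-greedy choice of green time for the current phase."""
        if random.random() < self.epsilon:
            return random.choice(self.DURATIONS)
        return max(self.DURATIONS, key=lambda d: self.q[(phase, d)])

    def step(self, queue_lengths):
        """Run one round-robin phase; reward = -(vehicles left waiting)."""
        phase = self.PHASES[self.phase_idx]
        duration = self.choose_duration(phase)
        served = min(queue_lengths[phase], duration // 2)  # toy throughput model
        reward = -(queue_lengths[phase] - served)
        # Bootstrap target for the next phase in the round-robin cycle,
        # mixing the local estimate with the neighbours' advertised values.
        next_phase = self.PHASES[(self.phase_idx + 1) % len(self.PHASES)]
        local_next = self.advertised_value(next_phase)
        if self.neighbours:
            neigh = sum(n.advertised_value(next_phase)
                        for n in self.neighbours) / len(self.neighbours)
            target = reward + self.gamma * ((1 - self.beta) * local_next
                                            + self.beta * neigh)
        else:
            target = reward + self.gamma * local_next
        key = (phase, duration)
        self.q[key] += self.alpha * (target - self.q[key])
        self.phase_idx = (self.phase_idx + 1) % len(self.PHASES)
        return phase, duration, reward
```

Two linked agents can then be stepped against observed queue lengths, with each agent's learned values influencing its neighbour's targets; in the paper this collaboration is what propagates congestion information between adjacent junctions.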