This paper describes the application of a decentralised coordination algorithm, Collaborative Reinforcement Learning (CRL), to two different distributed system problems. CRL enables independent agents to establish consensus and thereby optimise system-wide properties in distributed systems that have no support for global state. Agents reach consensus on local environmental or system properties by locally advertising policy information and by using received advertisements to update their own partial views of the system. Because CRL assumes that agents evaluate advertisements homogeneously, advertisements that improve the solution to the system optimisation problem tend to propagate quickly through the system, enabling it to collectively adapt its behaviour to a changing environment. We apply CRL to two problems: SAMPLE, a routing protocol for ad-hoc networks, and UTC-CRL, a next-generation urban traffic control system. We evaluate CRL experimentally in SAMPLE by comparing its system routing performance with that of existing ad-hoc routing protocols under changing environmental conditions, such as congestion and link unreliability. Because SAMPLE can establish consensus between routing agents on stable routes, even as the level of congestion in the network changes, it demonstrates improved performance and self-management properties. In applying CRL to the UTC scenario, we hope to validate experimentally the appropriateness of CRL to another system optimisation problem.
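The advertisement mechanism described above can be illustrated with a minimal sketch: each agent keeps cost estimates (Q-values) for delivery via its neighbours, periodically advertises its own best estimate, and blends received advertisements into its local, partial view using a Q-routing-style update. This is an illustrative approximation only, not the paper's exact CRL algorithm; the class and method names (`RoutingAgent`, `advertise`, `receive_advertisement`) and the fixed link cost are assumptions made for the example.

```python
class RoutingAgent:
    """Toy node that keeps Q-value estimates of delivery cost via each
    neighbour and shares them through periodic advertisements.
    A hypothetical sketch of decentralised, advertisement-driven learning,
    not the CRL algorithm from the paper."""

    def __init__(self, name, neighbours, alpha=0.5):
        self.name = name
        self.alpha = alpha                       # learning rate
        # q[n] ~ estimated cost of delivering via neighbour n
        self.q = {n: 1.0 for n in neighbours}

    def advertise(self):
        # Advertise the agent's current best estimate of its own cost.
        return min(self.q.values())

    def receive_advertisement(self, neighbour, advertised_cost, link_cost=1.0):
        # Q-routing-style update: blend the old estimate with the
        # neighbour's advertised cost plus the cost of reaching it.
        old = self.q[neighbour]
        self.q[neighbour] = old + self.alpha * (link_cost + advertised_cost - old)

    def best_next_hop(self):
        # Greedy policy over the agent's partial view of the system.
        return min(self.q, key=self.q.get)


# Usage: agent "a" hears an advertisement from neighbour "b" and
# revises its local estimate; routing decisions follow the cheapest estimate.
a = RoutingAgent("a", ["b", "c"])
a.receive_advertisement("b", advertised_cost=0.5)   # q[b]: 1.0 -> 1.25
print(a.best_next_hop())                            # "c" now looks cheaper
```

Because every agent applies the same update rule to the advertisements it receives, lower-cost estimates spread hop by hop, which is the localised propagation of good policy information that the abstract describes.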