Patrolling tasks arise in a variety of real-world domains, ranging from computer network administration and surveillance to computer wargame simulations. Patrolling is a complex multi-agent task that usually requires agents to coordinate their decision-making in order to achieve optimal performance of the group as a whole. In this paper, we show how the patrolling task can be modeled as a reinforcement learning (RL) problem, allowing continuous and automatic adaptation of the agents' strategies to their environment. We demonstrate that efficient cooperative behavior can be achieved by using RL methods, such as Q-Learning, to train individual agents. The proposed approach is fully distributed, which makes it computationally efficient. The empirical evaluation demonstrates the effectiveness of our approach: the results obtained are substantially better than the results previously reported for this domain.
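To illustrate the kind of setup the abstract describes, the following is a minimal, self-contained sketch of tabular Q-Learning for a single patrolling agent. The graph, the idleness-based reward, and all parameter values here are illustrative assumptions, not details taken from the paper; the paper trains multiple agents in a distributed fashion, whereas this sketch shows only one agent for clarity.

```python
import random

# Hypothetical patrol map: each node lists its neighbors.
# Nodes accumulate "idleness" (time since last visit); the agent is
# rewarded with the idleness of the node it moves to, so it learns
# to keep the whole graph freshly visited. All of this is an
# illustrative assumption, not the paper's exact formulation.
GRAPH = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # assumed hyperparameters

def train(episodes=200, steps=50, seed=0):
    rng = random.Random(seed)
    # One Q-value per (node, neighbor) pair.
    q = {(s, a): 0.0 for s in GRAPH for a in GRAPH[s]}
    for _ in range(episodes):
        idleness = {n: 0 for n in GRAPH}
        state = 0
        for _ in range(steps):
            neighbors = GRAPH[state]
            # Epsilon-greedy action selection over adjacent nodes.
            if rng.random() < EPSILON:
                action = rng.choice(neighbors)
            else:
                action = max(neighbors, key=lambda a: q[(state, a)])
            reward = idleness[action]      # visiting a stale node pays off
            for n in idleness:             # every node gets one tick staler
                idleness[n] += 1
            idleness[action] = 0           # the visited node is fresh again
            # Standard Q-Learning update.
            best_next = max(q[(action, a)] for a in GRAPH[action])
            q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                           - q[(state, action)])
            state = action
    return q
```

In the multi-agent setting the paper describes, each agent would run its own copy of such an update locally, which is what makes the approach fully distributed.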