This paper describes a general improvement over previous work on cooperative multiagent coordination. The focus is on highly dynamic environments where message transfer delay is not negligible. In such settings, agents cannot rely on communicating their intentions while they are making decisions, because doing so adds the communication latencies directly to the decision-making phase. The only practical way for agents to stay in touch is to share their beliefs asynchronously with the decision-making procedure; they can then hold similar knowledge and make coordinated decisions based on it. In a very dynamic environment, however, the shared knowledge may not remain similar because of communication limitations and latencies, which can lead to inconsistencies in team coordination. To address this issue, we propose that each agent hold an additional abstraction of the environment, called the Virtual World Model (VWM), alongside its primary internal world state. The primary world state is updated as soon as a new piece of information is received, whereas that information affects the VWM only through a synchronization mechanism. The proposed idea has been implemented and tested in the Iran University of Science and Technology (IUST) RoboCupRescue simulation team, the third-place winner of the 2006 world-cup competitions.
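The dual-model idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the class, field names, and the fixed-period synchronization rule are all assumptions chosen for clarity. It shows the essential separation, namely that incoming information updates the primary world state immediately, while decisions read only from a VWM snapshot refreshed at agreed synchronization points, so teammates with the same synchronized beliefs act on the same view despite differing message latencies.

```python
import copy

class Agent:
    """Illustrative sketch (not the paper's code) of keeping a primary
    world state plus a Virtual World Model (VWM) that changes only at
    synchronization points shared by the whole team."""

    def __init__(self, sync_period=3):
        self.primary = {}            # latest beliefs, updated on arrival
        self.vwm = {}                # stable snapshot used for decisions
        self.sync_period = sync_period  # hypothetical team-wide sync rule

    def receive(self, key, value):
        # New information (own sensing or a teammate's message) updates
        # the primary world state as soon as it arrives.
        self.primary[key] = value

    def maybe_sync(self, cycle):
        # The VWM is refreshed only at synchronization points, masking
        # the differing latencies with which agents received updates.
        if cycle % self.sync_period == 0:
            self.vwm = copy.deepcopy(self.primary)

    def decide(self):
        # Decisions read the VWM, not the primary state, so agents whose
        # VWMs were synchronized at the same point choose consistently.
        return sorted(self.vwm.items())
```

With this separation, an update that reaches one agent at cycle 1 and another at cycle 2 is invisible to both decision procedures until the next shared synchronization point, which is the consistency property the abstract aims for.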