An Analysis of the Pheromone Q-Learning Algorithm
IBERAMIA 2002 Proceedings of the 8th Ibero-American Conference on AI: Advances in Artificial Intelligence
Biological systems have often provided inspiration for the design of artificial systems; one such example is the ant colony. In this paper, an algorithm for multi-agent reinforcement learning, a modified Q-learning, is proposed. The algorithm is inspired by the natural behaviour of ants, which deposit pheromones in the environment to communicate. Beyond simulating the behaviour of an ant colony, the approach offers a way to design complex multi-agent systems in which complex behaviour emerges from relatively simple interacting agents. The proposed Q-learning update equation includes a belief factor, which reflects the confidence an agent has in the pheromone it detects in its environment. Agents communicate implicitly, co-operating to learn a solution to a path-planning problem. The results indicate that combining synthetic pheromone with standard Q-learning speeds up the learning process. It is also shown that the agents can be biased towards a preferred solution by adjusting the pheromone deposit and evaporation rates.
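To make the idea concrete, the following is a minimal sketch (not the paper's exact equations) of Q-learning augmented with a synthetic-pheromone belief term, on a toy corridor path-planning task. All names and constants here are assumptions; the belief factor is folded into the update as potential-based reward shaping, a simplification chosen so that the pheromone bonus guides learning without inflating the value of loops. Agents (here, successive episodes) deposit pheromone along successful paths, and the trail evaporates between episodes.

```python
import random

random.seed(0)

N_STATES, GOAL = 10, 9
ALPHA, GAMMA = 0.5, 0.9
XI = 0.3                     # weight of the pheromone belief term (assumed)
DEPOSIT, EVAPORATION = 1.0, 0.1

Q = [[0.0, 0.0] for _ in range(N_STATES)]   # two actions: 0 = left, 1 = right
pheromone = [0.0] * N_STATES                # synthetic pheromone per cell

def step(s, a):
    """Deterministic corridor: move left/right, reward 1 on reaching the goal."""
    s2 = min(max(s + (1 if a else -1), 0), N_STATES - 1)
    return s2, (1.0 if s2 == GOAL else 0.0)

def belief(s):
    """Confidence in the local pheromone: level relative to the strongest
    trail anywhere (a stand-in for the paper's belief factor)."""
    top = max(pheromone)
    return pheromone[s] / top if top > 0 else 0.0

def choose(s, eps=0.2):
    """Epsilon-greedy action selection."""
    if random.random() < eps:
        return random.randrange(2)
    return max(range(2), key=lambda a: Q[s][a])

for _ in range(200):
    s, trail = 0, [0]
    while s != GOAL:
        a = choose(s)
        s2, r = step(s, a)
        # Q-learning backup with the belief folded in as potential-based
        # shaping, so the pheromone biases learning without creating
        # self-reinforcing value loops.
        shaped = r + XI * (GAMMA * belief(s2) - belief(s))
        Q[s][a] += ALPHA * (shaped + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2
        trail.append(s)
    for cell in trail:                        # deposit along the found path
        pheromone[cell] += DEPOSIT
    pheromone = [p * (1 - EVAPORATION) for p in pheromone]  # evaporation

# The greedy policy should now walk straight down the corridor.
s, steps = 0, 0
while s != GOAL and steps < 2 * N_STATES:
    s, _ = step(s, max(range(2), key=lambda a: Q[s][a]))
    steps += 1
print("reached goal in", steps, "steps")
```

Raising `DEPOSIT` or lowering `EVAPORATION` strengthens the trail and hence the bias towards previously found paths, illustrating the abstract's point that deposit and evaporation rates steer agents towards a preferred solution.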