Phe-Q: A Pheromone Based Q-Learning
AI '01 Proceedings of the 14th Australian Joint Conference on Artificial Intelligence: Advances in Artificial Intelligence
The Phe-Q machine learning technique, a modified form of Q-learning, was developed to enable co-operating agents to communicate while learning to solve a problem. Phe-Q combines Q-learning with synthetic pheromone to improve the speed of convergence. The Phe-Q update equation includes a belief factor that reflects the confidence an agent has in the pheromone (the communication) deposited in the environment by other agents. With the Phe-Q update equation, the speed of convergence towards an optimal solution depends on a number of parameters, including the number of agents solving a problem, the amount of pheromone deposited, and the evaporation rate. This paper describes work carried out to optimise the speed of learning with the Phe-Q technique, in particular with respect to pheromone deposition and evaporation rates.
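The ingredients named in the abstract — a Q-learning update augmented by a pheromone-derived belief factor, plus deposition and evaporation parameters — can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the class name, the normalised form of the belief term, and the parameter names (`xi`, `deposit`, `evaporation`) are all assumptions.

```python
from collections import defaultdict

class PheQAgent:
    """Illustrative Phe-Q-style agent: Q-learning plus a synthetic
    pheromone map shared conceptually with other agents (assumed form)."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, xi=0.5,
                 deposit=1.0, evaporation=0.05):
        self.Q = defaultdict(float)      # Q(state, action) values
        self.phi = defaultdict(float)    # pheromone level per state
        self.actions = actions
        self.alpha, self.gamma, self.xi = alpha, gamma, xi
        self.deposit, self.evaporation = deposit, evaporation

    def belief(self, next_state):
        # Belief factor: pheromone at the successor state, normalised by
        # the total pheromone in the environment. Reflects the agent's
        # confidence in deposits made by other agents (assumed form).
        total = sum(self.phi.values()) or 1.0
        return self.phi[next_state] / total

    def update(self, state, action, reward, next_state):
        best_next = max(self.Q[(next_state, a)] for a in self.actions)
        # Standard Q-learning target augmented with a weighted
        # pheromone belief term for the successor state.
        target = reward + self.gamma * (
            best_next + self.xi * self.belief(next_state))
        self.Q[(state, action)] += self.alpha * (
            target - self.Q[(state, action)])
        # Deposit pheromone on the visited state, then evaporate all
        # pheromone by a fixed fraction per step.
        self.phi[state] += self.deposit
        for s in self.phi:
            self.phi[s] *= (1.0 - self.evaporation)
```

Under this sketch, raising `deposit` or lowering `evaporation` makes other agents' trails persist longer and weigh more heavily in each agent's target, which is the trade-off the paper's optimisation study explores.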