Exploration strategies for learning in multi-agent foraging

  • Authors:
  • Yogeswaran Mohan; Ponnambalam S.G.

  • Affiliations:
  • School of Engineering, Monash University, Petaling Jaya, Selangor, Malaysia (both authors)

  • Venue:
  • SEMCCO'11: Proceedings of the Second International Conference on Swarm, Evolutionary, and Memetic Computing - Volume Part II
  • Year:
  • 2011

Abstract

During the learning process, each agent's action affects its interaction with the environment, based on both the agent's current knowledge and the knowledge it may acquire. The agent must therefore choose between exploiting its current knowledge and exploring alternatives to improve its knowledge for better future decisions. This paper presents a critical analysis of a number of exploration strategies reported in the open literature. The strategies, namely random search, greedy, ε-greedy, Boltzmann Distribution (BD), Simulated Annealing (SA), Probability Matching (PM) and Optimistic Initial Values (OIV), are implemented to study their performance on a modeled multi-agent foraging task.
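To illustrate the exploration–exploitation trade-off the abstract describes, the sketch below shows two of the listed strategies, ε-greedy and Boltzmann (softmax) action selection, over a table of Q-values. This is a generic textbook rendering, not the paper's implementation; the function names and the Q-value table are illustrative assumptions.

```python
import math
import random

def epsilon_greedy(q_values, epsilon):
    """With probability epsilon explore (uniform random action);
    otherwise exploit the action with the highest Q-value."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def boltzmann(q_values, temperature):
    """Sample an action with probability proportional to exp(Q(a)/T).
    High T -> near-uniform exploration; low T -> near-greedy exploitation."""
    m = max(q_values)  # subtract max for numerical stability
    weights = [math.exp((q - m) / temperature) for q in q_values]
    r = random.random() * sum(weights)
    acc = 0.0
    for action, w in enumerate(weights):
        acc += w
        if r <= acc:
            return action
    return len(q_values) - 1

# Illustrative Q-values for three actions of a foraging agent
q = [0.1, 0.9, 0.3]
print(epsilon_greedy(q, 0.1))   # usually action 1, occasionally random
print(boltzmann(q, 0.5))        # stochastic, biased toward action 1
```

Simulated Annealing as an exploration strategy can be seen as Boltzmann selection with a temperature that decays over time, shifting the agent from exploration toward exploitation.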