Retaining the lessons from past for better performance in a dynamic multiple task environment

  • Authors:
  • Hasan Mujtaba; A. Rauf Baig

  • Affiliations:
  • CS Department, National University of Computer & Emerging Sciences, Islamabad, Pakistan; CS Department, National University of Computer & Emerging Sciences, Islamabad, Pakistan

  • Venue:
  • CEC '09: Proceedings of the Eleventh Congress on Evolutionary Computation
  • Year:
  • 2009

Abstract

Human beings learn to do a task and then go on to learn other tasks; however, they do not forget their previous learning. If the need arises, they can call upon that earlier learning and do not have to relearn from scratch. In this paper, we build upon our earlier work, in which we presented a mechanism for learning multiple tasks in a dynamic environment where the tasks can change arbitrarily, without any warning to the learning agents. The main feature of the mechanism is that a percentage of the learning agents are periodically made to reset their previous learning and restart learning from scratch. Thus, whenever there is a task change, there is always a sub-population that can learn the new task without being hampered by previous learning; the learning then spreads to the other members of the population. In our current work, we experiment with incorporating an archive for preserving those strategies which have performed well. The strategies in the archive are tested from time to time in the current environment. If the current task is the same as the task for which a strategy was first discovered, that strategy is rapidly adopted by the whole population. We present the criteria by which strategies are selected for storage in the archive, the policy for deleting strategies when the archive's limited space is exhausted, and the mechanism for selecting archived strategies for use in the current environment.
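
The abstract describes two cooperating mechanisms: the periodic reset of part of the population (from the earlier work) and a bounded archive of well-performing strategies (the present contribution). The paper does not publish code, so the Python sketch below is only one plausible way to organize such a loop; the names (`Archive`, `reset_fraction`), the fitness thresholds, and the evict-the-worst deletion policy are assumptions for illustration, not the authors' implementation.

```python
import random

class Archive:
    """Bounded archive of strategies that performed well on some past task.

    Hypothetical sketch: thresholds and policies here are assumptions,
    not the criteria reported in the paper.
    """

    def __init__(self, capacity=20):
        self.capacity = capacity      # archive has limited space
        self.entries = []             # (archived_fitness, strategy) pairs

    def consider(self, strategy, fitness, admit_threshold=0.8):
        """Admission criterion: keep only strategies whose fitness clears
        a threshold; when the archive is full, evict the weakest entry."""
        if fitness < admit_threshold:
            return
        if len(self.entries) >= self.capacity:
            worst = min(self.entries, key=lambda e: e[0])
            if fitness <= worst[0]:
                return                # no better than anything stored
            self.entries.remove(worst)
        self.entries.append((fitness, strategy))

    def retest(self, evaluate, adopt_threshold=0.8):
        """Re-test archived strategies on the *current* task. A strategy
        discovered on this same task should score well again and can then
        be re-injected so it spreads through the population."""
        best, best_score = None, adopt_threshold
        for _, strategy in self.entries:
            score = evaluate(strategy)   # fitness in the current environment
            if score > best_score:
                best, best_score = strategy, score
        return best                      # None if nothing fits the current task


def reset_fraction(population, fraction=0.1, fresh_strategy=None):
    """Periodic reset: a percentage of agents discard previous learning and
    restart from scratch, so a sub-population is always free to learn a
    newly introduced task."""
    k = max(1, int(fraction * len(population)))
    for agent in random.sample(population, k):
        agent.strategy = fresh_strategy() if fresh_strategy else None
```

Evicting the lowest-archived-fitness entry is just one candidate deletion policy; the paper itself evaluates specific criteria for admission, deletion under limited space, and selection of archived strategies for reuse.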