Interaction of culture-based learning and cooperative co-evolution and its application to automatic behavior-based system design

  • Authors:
Amir-Massoud Farahmand, Majid Nili Ahmadabadi, Caro Lucas, Babak N. Araabi

  • Affiliations:
  • Control and Intelligent Processing Center for Excellence, Department of Electrical and Computer Engineering, University of Tehran, Tehran, Iran, and School of Cognitive Sciences, Institute for Research in Fundamental Sciences (all authors); the first author is also with the Dept. of Com ...

  • Venue:
  • IEEE Transactions on Evolutionary Computation
  • Year:
  • 2010

Abstract

Designing an intelligent situated agent is a difficult task because the designer must see the problem from the agent's viewpoint, considering all its sensors, actuators, and computation systems. In this paper, we introduce a bio-inspired hybridization of reinforcement learning, cooperative co-evolution, and a culture-inspired memetic algorithm for the automatic development of behavior-based agents. Reinforcement learning is responsible for individual-level adaptation. Cooperative co-evolution operates at the population level and provides basic decision-making modules for the reinforcement-learning procedure. The culture-based memetic algorithm, which is a new computational interpretation of the meme metaphor, increases the lifetime performance of agents by sharing learning experiences among all agents in the society. In this paper, the design problem is decomposed into two parts: 1) developing a repertoire of behavior modules and 2) organizing them in the agent's architecture. Our proposed cooperative co-evolutionary approach solves the first problem by evolving behavior modules in separate genetic pools. We address the problem of relating the fitness of the agent to the fitness of its behavior modules by proposing two fitness sharing mechanisms, namely uniform and value-based fitness sharing. The organization of behavior modules in the architecture is determined by our structure learning method. A mathematical formulation is provided that shows how to decompose the value of the structure into simpler components. These values are estimated during learning and are used to find the organization of behavior modules during the agent's lifetime. To accelerate the learning process, we introduce a culture-based method based on our new interpretation of the meme metaphor. Our proposed memetic algorithm is a mechanism for sharing learned structures among agents in the society. Lifetime performance of the agent, which is quite important for real-world applications, increases considerably when the memetic algorithm is in action. Finally, we apply our methods to two benchmark problems, an abstract problem and a decentralized multirobot object-lifting task, and achieve human-competitive architecture designs.
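
The abstract describes three interacting levels: behavior modules evolved in separate genetic pools with agent fitness shared back to the modules, lifetime adaptation of the agent's structure, and a meme pool through which learned structures are shared across the society. The sketch below is a minimal, hypothetical Python illustration of how such a loop might be wired together; it is not the authors' implementation, and the class/function names, the toy fitness model, the uniform credit-assignment rule, and the "meme as best structure" shortcut are all assumptions made for illustration. The reinforcement-learning structure search and the value decomposition described in the paper are not modeled here.

```python
# Minimal, hypothetical sketch of the hybrid loop described in the abstract:
# separate genetic pools per behavior module (cooperative co-evolution),
# agent-level fitness credited back to modules (uniform sharing), and a
# meme pool of structures shared across agents. Toy fitness model; not the
# authors' algorithm.
import random

random.seed(0)

N_POOLS = 3                  # one genetic pool per behavior module
POOL_SIZE = 8
GENERATIONS = 20
TARGET = [0.8, 0.5, 0.3]     # toy "ideal" parameter for each module (assumption)


def module_quality(param, target):
    """Toy stand-in for how well one behavior module fits its role."""
    return 1.0 - abs(param - target)


def agent_fitness(team):
    """Fitness of a whole agent assembled from one module per pool."""
    return sum(module_quality(p, t) for p, t in zip(team, TARGET))


# Separate genetic pools: each pool evolves candidates for one behavior module.
pools = [[random.random() for _ in range(POOL_SIZE)] for _ in range(N_POOLS)]

# Meme pool: structures (here, complete module assignments) shared across agents.
meme_pool = []

for gen in range(GENERATIONS):
    # --- Cooperative co-evolution with uniform fitness sharing -------------
    # Assemble agents by picking one individual from each pool; the agent's
    # fitness is credited uniformly to every participating module.
    credit = [[0.0] * POOL_SIZE for _ in range(N_POOLS)]
    counts = [[1e-9] * POOL_SIZE for _ in range(N_POOLS)]
    for _ in range(POOL_SIZE):
        picks = [random.randrange(POOL_SIZE) for _ in range(N_POOLS)]
        team = [pools[k][picks[k]] for k in range(N_POOLS)]
        f = agent_fitness(team)
        for k in range(N_POOLS):
            credit[k][picks[k]] += f       # uniform fitness sharing
            counts[k][picks[k]] += 1.0

    # Selection + mutation inside each pool, driven by the shared credit.
    for k in range(N_POOLS):
        ranked = sorted(range(POOL_SIZE),
                        key=lambda i: credit[k][i] / counts[k][i],
                        reverse=True)
        elite = [pools[k][i] for i in ranked[:POOL_SIZE // 2]]
        pools[k] = elite + [min(1.0, max(0.0, p + random.gauss(0, 0.1)))
                            for p in elite]

    # --- Culture-based sharing ---------------------------------------------
    # A "meme" here is simply the best module assignment found this
    # generation; new agents would start from it instead of learning
    # their structure from scratch.
    best_team = [max(pools[k], key=lambda p: module_quality(p, TARGET[k]))
                 for k in range(N_POOLS)]
    meme_pool.append((agent_fitness(best_team), best_team))

print("best shared structure fitness:", max(meme_pool)[0])
```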