Emergence of Different Mating Strategies in Artificial Embodied Evolution
ICONIP '09 Proceedings of the 16th International Conference on Neural Information Processing: Part II
Embodied evolution is a methodology for evolutionary robotics that mimics the distributed, asynchronous, and autonomous properties of biological evolution. Evaluation, selection, and reproduction are carried out through cooperation and competition among the robots, without any need for human intervention. An embodied evolution framework is therefore well suited to studying adaptive learning mechanisms for artificial agents that share the same fundamental constraints as biological agents: self-preservation and self-reproduction. In this paper we propose a framework for performing embodied evolution with a limited number of robots, by utilizing time-sharing in subpopulations of virtual agents. Within this framework, we explore the combination of within-generation learning of basic survival behaviors by reinforcement learning, and evolutionary adaptation over generations of the basic behavior selection policy, the reward functions, and the meta-parameters for reinforcement learning. We apply a biologically inspired selection scheme in which there is no explicit communication of the individuals' fitness information. Individuals can only produce offspring by mating, a pair-wise exchange of genotypes, and the probability that an individual produces offspring in its own subpopulation depends on the individual's "health", i.e., energy level, at the mating occasion. We validate the proposed method by comparing it with evolution using standard centralized selection in simulation, and by transferring the obtained solutions to hardware using two real robots.
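The health-dependent mating scheme described in the abstract can be sketched in a few lines of Python. The logistic mapping from energy to reproduction probability, the one-point crossover, and all parameter values below are illustrative assumptions for the sketch; the paper does not specify these details here.

```python
import math
import random

def mating_probability(energy, e_mid=50.0, slope=0.1):
    """Illustrative logistic mapping from an agent's energy ("health")
    to the probability that it produces offspring at a mating occasion.
    Hypothetical parameters: e_mid is the energy at which the probability
    is 0.5, slope controls how sharply it rises with energy."""
    return 1.0 / (1.0 + math.exp(-slope * (energy - e_mid)))

def mate(parent_a, parent_b, rng=random):
    """Pair-wise exchange of genotypes: each parent independently decides,
    based on its own energy level, whether to produce an offspring in its
    own subpopulation by one-point crossover with the partner's genotype.
    No fitness value is ever communicated explicitly."""
    offspring = []
    for me, partner in ((parent_a, parent_b), (parent_b, parent_a)):
        if rng.random() < mating_probability(me["energy"]):
            cut = rng.randrange(1, len(me["genome"]))
            child_genome = me["genome"][:cut] + partner["genome"][cut:]
            offspring.append({"genome": child_genome, "energy": 0.0})
    return offspring
```

Note that selection pressure emerges implicitly: agents that learn better survival behaviors maintain higher energy and therefore reproduce more often, without any centralized fitness ranking.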