Adopting a symbiotic model of evolution separates the context for deploying an action from the action itself. This separation provides a mechanism for task decomposition in temporal sequence learning. Moreover, previously learned policies are treated as meta actions (actions that are themselves policies): should solutions to the task not be forthcoming in an initial round of evolution, the solutions from that round become the meta actions for a new round, providing the basis for evolving policy trees. A benchmarking study is performed using the Acrobot handstand task. Reinforcement learning solutions to date have not approached the performance established 14 years ago using an A* search and a priori knowledge of the Acrobot energy equations. The proposed symbiotic approach matches and, for the first time, betters those results. Moreover, unlike previous work, solutions are tested under a broad range of Acrobot initial conditions, with hierarchical solutions providing significantly better generalization performance.
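The hierarchy described above can be sketched in miniature. The following is an illustrative Python sketch, not the authors' implementation: the class names (`Symbiont`, `Policy`), the nearest-prototype context matching, and the random construction of teams are all assumptions made for illustration. What it shows is the core idea of the abstract: each symbiont pairs a context with an action, a policy is a team of symbionts whose best-matching context deploys its action, and a later round of evolution may reuse whole policies from an earlier round as meta actions, yielding a policy tree.

```python
import random

class Symbiont:
    """Pairs a context (a hypothetical prototype state) with an action.

    The action is either a primitive action index or a previously
    evolved Policy, i.e. a meta action."""
    def __init__(self, prototype, action):
        self.prototype = prototype  # context: a point in state space
        self.action = action        # int (primitive) or Policy (meta)

    def match(self, state):
        # Suitability of this symbiont's context for the current state:
        # negative squared distance, so the closest prototype wins.
        return -sum((p - s) ** 2 for p, s in zip(self.prototype, state))

class Policy:
    """A team of symbionts; the best-matching context deploys its action."""
    def __init__(self, symbionts):
        self.symbionts = symbionts

    def act(self, state):
        chosen = max(self.symbionts, key=lambda sy: sy.match(state))
        action = chosen.action
        # Meta action: recurse into a previously evolved policy.
        return action.act(state) if isinstance(action, Policy) else action

random.seed(0)

def random_policy(actions, dim=2, team=3):
    """Stand-in for an evolved policy: a random team over `actions`."""
    return Policy([Symbiont([random.uniform(-1, 1) for _ in range(dim)],
                            random.choice(actions)) for _ in range(team)])

# Round 1: policies over primitive actions (e.g. Acrobot torques -1, 0, +1
# encoded as indices 0, 1, 2).
round1 = [random_policy(actions=[0, 1, 2]) for _ in range(4)]

# Round 2: the earlier policies become the meta actions of a new round,
# so the root decision routes each state through the tree to a primitive.
root = random_policy(actions=round1)
print(root.act([0.1, -0.3]))  # a primitive action index reached via the tree
```

In a real run the random construction would be replaced by fitness-driven evolution, but the control flow is the same: the root policy only ever chooses which lower-level policy to defer to, and only leaves emit primitive actions.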