The success of evolutionary methods on standard control learning tasks has created a need for new benchmarks: the classic single pole balancing problem is no longer difficult enough to serve as a viable yardstick for measuring the learning efficiency of these systems. The double pole case, in which two poles attached to the cart must be balanced simultaneously, is much more difficult, especially when velocity information is not available. In this article, we present a neuroevolution system, Enforced Sub-populations (ESP), and use it to evolve controllers for the standard double pole task and a much harder, non-Markovian version. In both cases, our results show that ESP is faster than other neuroevolution methods. In addition, we introduce an incremental method that evolves on a sequence of increasingly difficult tasks and uses a local search technique, Delta-Coding, to sustain diversity. This method enables the system to solve even more difficult versions of the task that direct evolution cannot.
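To make the core idea concrete, the sketch below illustrates ESP-style cooperative coevolution in a heavily simplified form: one subpopulation per hidden-neuron slot, networks assembled by sampling one genome from each subpopulation, and each trial's fitness credited back to the participating neurons. This is an assumption-laden toy, not the authors' implementation: the cart-pole dynamics are replaced by a small XOR regression task, and ESP details such as burst mutation and Delta-Coding are omitted. All names (`new_genome`, `net_output`, the constants) are invented for illustration.

```python
import math
import random

random.seed(0)

N_NEURONS = 3       # hidden units; one subpopulation per unit (ESP's key idea)
SUBPOP_SIZE = 20    # genomes per subpopulation
N_INPUTS = 2
TRIALS_PER_GEN = 40
GENERATIONS = 200

def new_genome():
    # One hidden neuron: input weights plus one output weight (no bias, for brevity).
    return [random.uniform(-1.0, 1.0) for _ in range(N_INPUTS + 1)]

def net_output(neurons, x):
    # Assemble a one-hidden-layer network from the sampled neurons.
    out = 0.0
    for w in neurons:
        act = math.tanh(sum(wi * xi for wi, xi in zip(w, x)))
        out += w[-1] * act
    return out

def fitness(neurons):
    # Toy stand-in for a pole-balancing rollout: negative squared error on XOR.
    cases = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    return -sum((net_output(neurons, x) - y) ** 2 for x, y in cases)

subpops = [[new_genome() for _ in range(SUBPOP_SIZE)] for _ in range(N_NEURONS)]
best_ever = -math.inf
first_gen_best = None

for gen in range(GENERATIONS):
    scores = [[0.0] * SUBPOP_SIZE for _ in range(N_NEURONS)]
    counts = [[0] * SUBPOP_SIZE for _ in range(N_NEURONS)]
    for _ in range(TRIALS_PER_GEN):
        # Sample one neuron from each subpopulation and evaluate them jointly.
        picks = [random.randrange(SUBPOP_SIZE) for _ in range(N_NEURONS)]
        f = fitness([subpops[i][p] for i, p in enumerate(picks)])
        best_ever = max(best_ever, f)
        for i, p in enumerate(picks):
            scores[i][p] += f    # credit the joint fitness to each participant
            counts[i][p] += 1
    if gen == 0:
        first_gen_best = best_ever
    # Truncation selection plus Gaussian mutation, within each subpopulation.
    for i in range(N_NEURONS):
        avg = [scores[i][j] / counts[i][j] if counts[i][j] else -1e9
               for j in range(SUBPOP_SIZE)]
        order = sorted(range(SUBPOP_SIZE), key=lambda j: avg[j], reverse=True)
        elite = [subpops[i][j] for j in order[:SUBPOP_SIZE // 4]]
        children = []
        while len(elite) + len(children) < SUBPOP_SIZE:
            parent = random.choice(elite)
            children.append([w + random.gauss(0.0, 0.3) for w in parent])
        subpops[i] = elite + children
```

Because selection operates on each neuron's average fitness across the networks it participated in, neurons are rewarded for cooperating well with the rest of the population, which is what distinguishes ESP from evolving whole networks as monolithic genomes.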