Recurrent neural networks are theoretically capable of learning complex temporal sequences, but training them by gradient descent is too slow and unstable for practical use in reinforcement learning environments. Neuroevolution, the evolution of artificial neural networks using genetic algorithms, can potentially solve real-world reinforcement learning tasks that require deep use of memory, i.e., memory spanning hundreds or thousands of inputs, by searching the space of recurrent neural networks directly. In this paper, we introduce a new neuroevolution algorithm, Hierarchical Enforced SubPopulations, that simultaneously evolves networks at two levels of granularity: full networks and network components (neurons). We demonstrate the method on two POMDP tasks that involve temporal dependencies of up to thousands of time steps, and show that it is faster and simpler than the best current conventional reinforcement learning system on these tasks.
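To make the two-level idea concrete, the following is a minimal sketch of coevolution in the spirit of Enforced SubPopulations with an added network-level archive. It is not the paper's implementation: the toy fitness function, population sizes, mutation scheme, and all names are illustrative assumptions. Neurons evolve in per-position subpopulations and earn fitness through the complete networks they participate in, while the best full networks found are retained at a second level.

```python
import random

NUM_NEURONS = 4   # hidden-unit positions (one subpopulation each) - assumed
SUBPOP_SIZE = 10  # candidate neurons per subpopulation - assumed
GENE_LEN = 3      # weights per neuron genome (toy size) - assumed
TRIALS = 20       # network evaluations per generation - assumed

def toy_fitness(network):
    # Stand-in task for the demo: reward weights close to 1.0.
    return -sum((w - 1.0) ** 2 for neuron in network for w in neuron)

def random_neuron():
    return [random.uniform(-1, 1) for _ in range(GENE_LEN)]

def evolve(generations=30, seed=0):
    random.seed(seed)
    # Neuron level: one subpopulation per hidden-unit position.
    subpops = [[random_neuron() for _ in range(SUBPOP_SIZE)]
               for _ in range(NUM_NEURONS)]
    net_pop = []  # network level: archive of best complete networks
    for _ in range(generations):
        scores = [[0.0] * SUBPOP_SIZE for _ in range(NUM_NEURONS)]
        counts = [[0] * SUBPOP_SIZE for _ in range(NUM_NEURONS)]
        for _ in range(TRIALS):
            # Assemble a network by drawing one neuron per subpopulation.
            idx = [random.randrange(SUBPOP_SIZE) for _ in range(NUM_NEURONS)]
            net = [subpops[p][i] for p, i in enumerate(idx)]
            f = toy_fitness(net)
            net_pop.append((f, [n[:] for n in net]))
            for p, i in enumerate(idx):
                scores[p][i] += f
                counts[p][i] += 1
        # Keep only the top complete networks at the network level.
        net_pop = sorted(net_pop, key=lambda x: x[0], reverse=True)[:5]
        # Neuron fitness = mean fitness of the networks it served in;
        # neurons never sampled this generation rank last.
        for p in range(NUM_NEURONS):
            def avg(i, p=p):
                return (scores[p][i] / counts[p][i]
                        if counts[p][i] else float('-inf'))
            ranked = sorted(range(SUBPOP_SIZE), key=avg, reverse=True)
            elite = [subpops[p][i] for i in ranked[:SUBPOP_SIZE // 2]]
            # Refill each subpopulation with mutated copies of its elite.
            subpops[p] = elite + [
                [w + random.gauss(0, 0.1) for w in random.choice(elite)]
                for _ in range(SUBPOP_SIZE - len(elite))]
    return net_pop[0]  # (fitness, network) of the best full network found

best_f, best_net = evolve()
```

The design point the sketch illustrates is the credit-assignment split: selection pressure acts on individual neurons (via the average fitness of the networks they join) while the network-level archive preserves good complete combinations that random reassembly would otherwise lose.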