In this article we describe EANT (Evolutionary Acquisition of Neural Topologies), a method that creates neural networks by evolutionary reinforcement learning. The structure of the networks is developed by mutation operators, starting from a minimal topology, while their parameters are optimised with CMA-ES (Covariance Matrix Adaptation Evolution Strategy), a derandomised variant of evolution strategies. EANT produces highly specialised neural networks that achieve very good performance while remaining relatively small. We demonstrate this in experiments where our method competes with NEAT (NeuroEvolution of Augmenting Topologies) in evolving networks that control a robot in a visual servoing scenario.
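The alternation the abstract describes, growing a topology by mutation from a minimal starting structure and then tuning its weights with an evolution strategy, can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: the network encoding, the regression task, and the `(1+1)`-ES used here in place of full CMA-ES are all simplifying assumptions.

```python
import math
import random

random.seed(0)

def forward(weights, hidden, x):
    """Tiny 1-input/1-output net: `hidden` tanh units plus an output bias."""
    out = weights[-1]                       # output bias is the last gene
    for h in range(hidden):
        w_in, w_out = weights[2 * h], weights[2 * h + 1]
        out += w_out * math.tanh(w_in * x)
    return out

def fitness(weights, hidden):
    """Negative squared error on a toy target (stand-in for an RL return)."""
    xs = [i / 10 for i in range(-10, 11)]
    return -sum((forward(weights, hidden, x) - math.sin(2 * x)) ** 2 for x in xs)

def mutate_structure(weights, hidden):
    """Structural mutation: add one hidden unit with zero outgoing weight,
    so the behaviour (and fitness) is initially unchanged."""
    return weights[:-1] + [random.gauss(0, 1), 0.0] + [weights[-1]], hidden + 1

def optimise_weights(weights, hidden, steps=200, sigma=0.3):
    """Parameter optimisation; a simple (1+1)-ES with a fixed step size stands
    in for CMA-ES, which would additionally adapt the mutation covariance."""
    best, best_f = weights, fitness(weights, hidden)
    for _ in range(steps):
        cand = [w + random.gauss(0, sigma) for w in best]
        f = fitness(cand, hidden)
        if f > best_f:
            best, best_f = cand, f
    return best, best_f

weights, hidden = [0.0], 0                  # minimal structure: only an output bias
weights, best_f = optimise_weights(weights, hidden)
for _ in range(3):                          # alternate structural growth and tuning
    weights, hidden = mutate_structure(weights, hidden)
    weights, best_f = optimise_weights(weights, hidden)
print(hidden, round(-best_f, 3))            # hidden units grown, remaining error
```

Because each new hidden unit starts with zero outgoing weight and the elitist ES never accepts a worse candidate, fitness is non-decreasing across the whole run, which mirrors why growing from a minimal structure tends to yield small, specialised networks.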