The idea of using evolutionary computation to train artificial neural networks, or neuroevolution (NE), for reinforcement learning (RL) tasks has now been around for over 20 years. However, as RL tasks become more challenging, the required networks grow larger, as do their genomes, and scaling NE to large networks (i.e., tens of thousands of weights) is infeasible with direct encodings that map genes one-to-one to network components. In this paper, we scale up our compressed network encoding, in which network weight matrices are represented indirectly as a set of Fourier-type coefficients, to tasks whose high-dimensional input spaces require very large networks. The approach is demonstrated successfully on two reinforcement learning tasks in which the control networks receive visual input: (1) a vision-based version of the octopus-arm control task, requiring networks with over 3,000 weights, and (2) a version of the TORCS driving game, in which networks with over 1 million weights are evolved to drive a car around a track using video images from the driver's perspective.
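The core idea of the compressed encoding is that the genome holds only a small set of low-frequency coefficients, which are expanded by an inverse cosine transform into a full weight matrix, so genome size is decoupled from network size. The sketch below illustrates this with a hand-rolled inverse-DCT-style synthesis; the function name and the 4x4 coefficient shape are illustrative choices, not the paper's actual parameterization.

```python
import numpy as np

def decode_weights(coeffs, rows, cols):
    """Expand a small grid of low-frequency coefficients into a full
    rows x cols weight matrix via a DCT-like cosine basis (sketch only;
    the actual encoding uses Fourier-type coefficients)."""
    W = np.zeros((rows, cols))
    ki, kj = coeffs.shape
    for i in range(ki):
        for j in range(kj):
            # 2-D cosine basis function of frequency (i, j)
            basis = np.outer(
                np.cos(np.pi * i * (np.arange(rows) + 0.5) / rows),
                np.cos(np.pi * j * (np.arange(cols) + 0.5) / cols),
            )
            W += coeffs[i, j] * basis
    return W

# A genome of only 16 coefficients decodes into a 100 x 50 weight matrix;
# evolution searches the 16-D coefficient space, not the 5000-D weight space.
genome = np.random.randn(4, 4)
W = decode_weights(genome, 100, 50)
print(W.shape)  # (100, 50)
```

Because the search operates on the coefficients, the same short genome can be decoded into matrices of any size, which is what makes million-weight networks tractable to evolve.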