Indirect encoding schemes for neural network phenotypes can represent large networks compactly. In previous work, we presented an approach in which networks are encoded indirectly as a set of Fourier-type coefficients that decorrelate the weight matrices, so that networks can often be represented by a small number of genes, effectively reducing the dimensionality of the search space and speeding up search. Until now, the complexity of networks using this encoding had to be fixed a priori, both in terms of (1) the number of free parameters (topology) and (2) the number of coefficients. In this paper, we introduce a method, called Compressed Network Complexity Search (CNCS), for automatically determining network complexity that favors parsimonious solutions. CNCS maintains a probability distribution over complexity classes, which it uses to select the class to optimize next. Class probabilities are adapted based on their expected fitness. Starting from a prior biased toward the simplest networks, the distribution gradually expands toward more complex classes until a solution is found. Experiments on two benchmark control problems, including a challenging non-linear version of the helicopter hovering task, demonstrate that the method consistently finds simple solutions.
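The complexity-class selection loop described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the geometric simplicity prior, the random-sampling "optimizer" within each class, and the `fitness_fn` interface are all assumptions made for the example; in the actual method each class would be optimized with an evolutionary search over the compressed (Fourier-coefficient) genome.

```python
import random

def cncs_search(fitness_fn, max_class=10, iterations=200, decay=0.9):
    """Sketch of CNCS-style complexity search.

    Maintains a probability distribution over complexity classes (here,
    the number of coefficients in the genome), starting from a prior
    biased toward the simplest class, and adapts class probabilities
    based on the expected fitness observed within each class.
    """
    classes = list(range(1, max_class + 1))
    # Prior biased toward the simplest networks (geometric decay).
    prior = [decay ** c for c in classes]
    total = sum(prior)
    probs = [p / total for p in prior]

    expected = [0.0] * len(classes)   # running mean fitness per class
    counts = [0] * len(classes)
    best = (float("-inf"), None)

    for _ in range(iterations):
        # Sample which complexity class to optimize next.
        i = random.choices(range(len(classes)), weights=probs)[0]
        # Stand-in for per-class optimization: sample one genome.
        genome = [random.gauss(0.0, 1.0) for _ in range(classes[i])]
        f = fitness_fn(genome)
        if f > best[0]:
            best = (f, genome)
        # Update the running expected fitness of the chosen class.
        counts[i] += 1
        expected[i] += (f - expected[i]) / counts[i]
        # Re-derive class probabilities from expected fitness, keeping
        # the simplicity bias as a multiplicative prior; the small
        # constant keeps every class reachable.
        scores = [max(e, 0.0) * p + 1e-6 for e, p in zip(expected, prior)]
        total = sum(scores)
        probs = [s / total for s in scores]
    return best
```

A toy usage, rewarding genomes whose length is close to a hypothetical target of 3 coefficients: `cncs_search(lambda g: 1.0 / (1.0 + abs(len(g) - 3)))` tends to concentrate sampling on the smallest classes that score well, mirroring the parsimony pressure described in the abstract.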