Guarding against premature convergence while accelerating evolutionary search
Proceedings of the 12th annual conference on Genetic and evolutionary computation
In evolutionary algorithms, much time is spent evaluating inferior phenotypes that produce no offspring. A common heuristic to address this inefficiency is to stop an evaluation early if it holds little promise of attaining high fitness. However, the form of this heuristic typically depends on the fitness function used, and there is a danger of prematurely stopping the evaluation of a phenotype that might have recovered later in the evaluation period. Here a stopping method is introduced in which fitness gradually decreases over the phenotype's evaluation, rather than accumulating. This method is independent of the fitness function used, stops only those phenotypes that are guaranteed to become inferior to the current offspring-producing phenotypes, and realizes significant time savings across several evolutionary robotics tasks. It was found that for many tasks, time complexity was reduced from polynomial to sublinear, and that time savings grew both with the number of training instances used to evaluate a phenotype and with task difficulty.
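The stopping rule described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `penalty` function, the toy scalar phenotype, and the threshold semantics are all assumptions. The key property is that fitness only ever decreases during evaluation, so once it falls below the fitness needed to out-compete current offspring-producing phenotypes, the final fitness is guaranteed to be inferior and the evaluation can be aborted safely.

```python
def penalty(phenotype, instance):
    """Toy non-negative per-instance penalty (illustrative assumption):
    distance of a scalar phenotype from a target instance."""
    return abs(phenotype - instance)

def evaluate_with_early_stopping(phenotype, instances, threshold):
    """Evaluate `phenotype` on `instances`, subtracting a penalty per instance.

    Fitness starts at 0 and is monotonically non-increasing, so as soon as it
    drops below `threshold` (the fitness needed to produce offspring), the
    final fitness is guaranteed to stay below `threshold` and we can stop.

    Returns (fitness_so_far, number_of_instances_actually_evaluated).
    """
    fitness = 0.0
    for i, instance in enumerate(instances):
        fitness -= penalty(phenotype, instance)  # penalty >= 0, fitness never rises
        if fitness < threshold:
            return fitness, i + 1  # guaranteed inferior: stop early
    return fitness, len(instances)
```

For example, a phenotype that fails badly on the first training instance is cut off after one evaluation step, while a phenotype that stays above the threshold is evaluated on all instances — regardless of how the penalty itself is defined, which is what makes the rule independent of the fitness function.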