Hyperclimbing is an intuitive, general-purpose, global optimization heuristic applicable to discrete product spaces with rugged or stochastic cost functions. The strength of this heuristic lies in its immunity to local optima when the cost function is deterministic, and its tolerance for noise when the cost function is stochastic. Hyperclimbing works by decimating a search space, i.e., by iteratively fixing the values of small numbers of variables. The hyperclimbing hypothesis posits that genetic algorithms with uniform crossover (UGAs) perform optimization by implementing efficient hyperclimbing. Proof of concept for the hyperclimbing hypothesis comes from an analytic technique that exploits algorithmic symmetry. By way of validation, we present experimental results showing that a simple tweak inspired by the hyperclimbing hypothesis dramatically improves the performance of a UGA on large, random instances of MAX-3SAT and the Sherrington-Kirkpatrick spin glass problem. An exciting corollary of the hyperclimbing hypothesis is that a form of implicit parallelism more powerful than the kind described by Holland underlies optimization in UGAs. The implications of the hyperclimbing hypothesis for Evolutionary Computation and Artificial Intelligence are discussed.
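The decimation idea described above can be sketched in code. The following is a minimal, hypothetical illustration, not the paper's algorithm: it repeatedly picks a small set of unfixed variables, estimates the expected cost of each joint setting by sampling the remaining free variables uniformly at random, and then permanently fixes ("decimates") the subset to its best-looking setting. All names (`hyperclimb`, `subset_size`, `samples`) are invented for this sketch.

```python
import random
from itertools import product


def hyperclimb(fitness, n_vars, subset_size=1, samples=64, rng=None):
    """Decimation-style search over {0,1}^n_vars (illustrative sketch).

    Repeatedly choose a small subset of unfixed variables, estimate the
    mean fitness of each joint value for that subset (with all other
    unfixed variables sampled uniformly), then fix the subset to the
    best-scoring value. Returns the fully fixed assignment.
    """
    rng = rng or random.Random(0)
    fixed = {}                      # variable index -> fixed bit
    unfixed = list(range(n_vars))
    while unfixed:
        subset = rng.sample(unfixed, min(subset_size, len(unfixed)))
        best_vals, best_score = None, float("-inf")
        for vals in product((0, 1), repeat=len(subset)):
            # Monte Carlo estimate of mean fitness with `subset` clamped
            # to `vals` and every other free variable drawn uniformly.
            total = 0.0
            for _ in range(samples):
                x = [rng.randint(0, 1) for _ in range(n_vars)]
                for i, b in fixed.items():
                    x[i] = b
                for i, b in zip(subset, vals):
                    x[i] = b
                total += fitness(x)
            score = total / samples
            if score > best_score:
                best_vals, best_score = vals, score
        # Decimate: these variables are now fixed for good.
        for i, b in zip(subset, best_vals):
            fixed[i] = b
            unfixed.remove(i)
    return [fixed[i] for i in range(n_vars)]
```

Because each decision compares noisy averages rather than single evaluations, the sketch tolerates stochastic cost functions; raising `samples` trades time for decision reliability. On a trivially separable objective like one-max (`fitness=sum`), each one-variable decision has an expected fitness gap of 1, so the procedure fixes every bit correctly with high probability.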