Genetic algorithms (GAs) have been successfully applied to numerical optimization problems. Since GAs are usually designed for unconstrained optimization, they must be adapted to tackle constrained cases, i.e. those in which not all representable solutions are feasible. In this work we experimentally compare five ways of achieving such adaptation. Our analysis relies on the usual approach of selecting an arbitrary suite of test functions (25 in this case), but applies a methodology that allows us to determine, within statistical certainty limits, which method is best. To this end we selected five penalty function strategies; for each of these we further selected three particular GAs. The behavior of each strategy and its associated GAs is then established by extensively sampling the function suite and deriving worst-case best values from Chebyshev's theorem. We found some counterintuitive results, which we discuss and attempt to explain.
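The two ingredients the abstract combines can be sketched briefly. The snippet below is an illustrative sketch, not the paper's actual procedure: `penalized_fitness` shows one generic static-penalty strategy (the penalty coefficient and the form of the violation term are assumptions, since the abstract does not specify the five strategies), and `chebyshev_upper_bound` shows how Chebyshev's inequality yields a distribution-free "worst case" bound on sampled best values.

```python
import statistics

def penalized_fitness(x, objective, constraints, penalty_coeff=1e3):
    """Static-penalty fitness for minimization: the raw objective plus a
    weighted sum of constraint violations. Each g in `constraints` encodes
    an inequality constraint g(x) <= 0, so max(0, g(x)) measures how far x
    strays into the infeasible region. The coefficient 1e3 is an arbitrary
    illustrative choice."""
    violation = sum(max(0.0, g(x)) for g in constraints)
    return objective(x) + penalty_coeff * violation

def chebyshev_upper_bound(samples, k=3.0):
    """Distribution-free bound from Chebyshev's inequality: for any
    distribution, at least 1 - 1/k**2 of the probability mass lies within
    k standard deviations of the mean. Hence mean + k*std bounds the
    worst-case best value with confidence at least 1 - 1/k**2, regardless
    of the (unknown) distribution of the sampled best values."""
    mu = statistics.mean(samples)
    sigma = statistics.pstdev(samples)
    return mu + k * sigma, 1.0 - 1.0 / k ** 2
```

For example, with best values sampled over repeated GA runs, `chebyshev_upper_bound(best_values, k=3.0)` returns a bound that holds with probability at least 8/9, which is what makes Chebyshev-based comparisons attractive when the distribution of best values is unknown.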