Genetic algorithms (GAs) are very efficient at exploring the entire search space, but they are relatively poor at finding the precise local optimum in the region to which they converge. Hybrid GAs combine GAs with improvement procedures that are good at finding local optima. There are two basic strategies for hybridization. In the first, Lamarckian learning, the genetic representation is updated to match the improved solution found by the improvement procedure. In the second, Baldwinian learning, the improvement procedure is used to change the fitness landscape, but the improved solution is not encoded back into the genetic string. This paper examines partial Lamarckianism (i.e., updating the genetic representation for only a percentage of the individuals) as compared to pure Lamarckian and pure Baldwinian learning in hybrid GAs. Multiple instances of five bounded nonlinear problems, the location-allocation problem, and the cell formation problem were used as test problems in an empirical investigation. Neither a pure Lamarckian nor a pure Baldwinian search strategy consistently led to quicker convergence of the GA to the best known solution across the test problems. Based on a minimax criterion (i.e., minimizing the worst-case performance across all test problem instances), the 20% and 40% partial Lamarckianism strategies yielded the best mix of solution quality and computational efficiency.
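The partial Lamarckianism described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the hill-climbing improvement procedure, the sphere objective used in the usage example, and the function names (`local_search`, `hybrid_ga_step`) are assumptions chosen for clarity. With probability `p_lamarck` (e.g., 0.2 or 0.4) the improved genotype is written back (Lamarckian); otherwise the genotype is left unchanged and only the improved fitness is used (Baldwinian).

```python
import random

def local_search(x, f, step=0.1, iters=20):
    """Improvement procedure: a simple hill climber that only
    accepts moves that reduce the (minimized) objective f."""
    best = list(x)
    for _ in range(iters):
        cand = [xi + random.uniform(-step, step) for xi in best]
        if f(cand) < f(best):
            best = cand
    return best

def hybrid_ga_step(population, f, p_lamarck=0.2):
    """One improvement pass of a hybrid GA with partial Lamarckianism.

    Each individual is locally improved. With probability p_lamarck the
    improved genotype replaces the original (Lamarckian update); otherwise
    the original genotype is kept (Baldwinian). In both cases the fitness
    assigned to the individual is that of the improved solution.
    """
    new_pop, fitnesses = [], []
    for ind in population:
        improved = local_search(ind, f)
        if random.random() < p_lamarck:
            new_pop.append(improved)    # Lamarckian: write back into the genotype
        else:
            new_pop.append(list(ind))   # Baldwinian: genotype unchanged
        fitnesses.append(f(improved))   # fitness of the improved solution
    return new_pop, fitnesses

if __name__ == "__main__":
    random.seed(0)
    sphere = lambda x: sum(xi * xi for xi in x)  # assumed toy objective
    pop = [[1.0, 1.0], [2.0, -1.0], [-1.5, 0.5]]
    new_pop, fits = hybrid_ga_step(pop, sphere, p_lamarck=0.4)
    print(fits)
```

Setting `p_lamarck=1.0` gives pure Lamarckian search and `p_lamarck=0.0` gives pure Baldwinian search, so the same routine covers the whole spectrum the paper compares.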