This paper examines the scalability of several types of parallel genetic algorithms (GAs). The objective is to determine the optimal number of processors that each type can use to minimize the execution time. The first part of the paper considers algorithms with a single population. The investigation focuses on an implementation where the population is distributed across several processors, but the results are applicable to the more common master-slave implementations, where the population is stored entirely on a master processor and multiple slaves are used to evaluate the fitness. The second part of the paper deals with parallel GAs with multiple populations. It first considers a bounding case where the connectivity, the migration rate, and the frequency of migrations are set to their maximal values. Then, arbitrary regular topologies with lower migration rates are considered, and the frequency of migrations is set to its lowest value. The investigation is mainly theoretical, but experimental evidence with an additively-decomposable function is included to illustrate the accuracy of the theory. In all cases, the calculations show that the optimal number of processors that minimizes the execution time grows as the square root of the product of the population size and the fitness evaluation time. Since these two factors usually increase as the domain becomes more difficult, the results of the paper suggest that parallel GAs can integrate large numbers of processors and significantly reduce the execution time of many practical applications.