Genetic algorithms with sharing for multimodal function optimization
Proceedings of the Second International Conference on Genetic Algorithms and Their Applications
Genetic algorithms + data structures = evolution programs (3rd ed.)
Niching methods for genetic algorithms
An introduction to differential evolution
New ideas in optimization
Journal of Global Optimization
A species conserving genetic algorithm for multimodal function optimization
Evolutionary Computation
Finding Multimodal Solutions Using Restricted Tournament Selection
Proceedings of the 6th International Conference on Genetic Algorithms
An analysis of the behavior of a class of genetic adaptive systems
A multimodal particle swarm optimizer based on fitness Euclidean-distance ratio
Proceedings of the 9th annual conference on Genetic and evolutionary computation
Niche radius adaptation in the CMA-ES niching algorithm
PPSN'06 Proceedings of the 9th international conference on Parallel Problem Solving from Nature
In recent decades, solving multi-modal optimization problems has attracted considerable attention in the evolutionary computation community. Multi-modal optimization refers to locating not just a single optimum but the entire set of optima in the search space. To locate multiple optima in parallel, many niching techniques have been proposed in the literature and incorporated into evolutionary algorithms. In this paper, a local search technique is proposed and integrated with the existing Fitness Euclidean-distance Ratio PSO (FER-PSO) to enhance its fine-search ability, i.e., its ability to identify multiple optima. The algorithm is tested on eight commonly used benchmark functions and compared with the original FER-PSO as well as several multi-modal optimization algorithms from the literature. The experimental results suggest that the proposed technique not only increases the probability of finding both global and local optima but also speeds up the search, reducing the average number of function evaluations.
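The FER-PSO mechanism mentioned above steers each particle toward a nearby, fitter personal best by maximizing a fitness-Euclidean-distance ratio, which lets sub-swarms form around different optima without an explicit niche radius. The sketch below illustrates that selection step only; the function name `fer_nbest`, the scaling of `alpha` by the search-space diagonal over the current fitness range, and the small epsilons are illustrative assumptions based on the standard FER definition, not the paper's exact implementation (the proposed local search step is not shown).

```python
import numpy as np

def fer_nbest(pbest_pos, pbest_fit, lower, upper):
    """For each particle, pick a neighborhood best: the personal best that
    maximizes the fitness-Euclidean-distance ratio (FER). Maximization of
    fitness is assumed. This is an illustrative sketch, not the paper's code."""
    n = len(pbest_fit)
    # alpha scales fitness differences to be commensurate with distances:
    # search-space diagonal divided by the current personal-best fitness range.
    diag = np.linalg.norm(np.asarray(upper, float) - np.asarray(lower, float))
    f_range = max(pbest_fit) - min(pbest_fit) or 1e-12  # guard zero range
    alpha = diag / f_range
    nbest = []
    for i in range(n):
        best_j, best_fer = i, -np.inf
        for j in range(n):
            if j == i:
                continue
            dist = np.linalg.norm(pbest_pos[j] - pbest_pos[i])
            # FER: fitness gain per unit distance (epsilon guards d == 0).
            fer = alpha * (pbest_fit[j] - pbest_fit[i]) / (dist + 1e-12)
            if fer > best_fer:
                best_fer, best_j = fer, j
        nbest.append(best_j)
    return nbest

# Toy usage: particle 0 prefers the close, fitter neighbor (index 1)
# over the distant one (index 2), which is what enables niching.
pos = np.array([[0.0], [0.1], [5.0]])
fit = [1.0, 2.0, 1.5]
print(fer_nbest(pos, fit, [0.0], [5.0]))  # -> [1, 2, 1]
```

In the velocity update, this FER-selected neighbor replaces the single global best of standard PSO, so particles near different peaks are pulled toward different attractors rather than all converging on one optimum.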