Multimodal optimization remains one of the most challenging tasks in evolutionary computation. In recent years, many evolutionary multimodal optimization algorithms have been developed. All of these algorithms must address two issues to solve a multimodal problem successfully: how to identify multiple global/local optima, and how to maintain the identified optima until the end of the search. Most multimodal optimization algorithms, however, lack effective fine-grained local search capability: when the required accuracy is high, they fail to locate the desired optima even after converging near them. To overcome this problem, this paper integrates a novel local search technique with several existing PSO-based multimodal optimization algorithms to enhance their local search ability. The algorithms are tested on 14 commonly used multimodal optimization problems, and the experimental results suggest that the proposed technique not only increases the probability of finding both global and local optima but also reduces the average number of function evaluations.
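The core idea of the abstract — a population-based search that converges near an optimum, followed by a local refinement step to reach high accuracy — can be illustrated with a minimal sketch. This is not the paper's algorithm: it assumes a canonical global-best PSO and a simple hill-climbing local search, and the test function (`sin(5*pi*x)^6`, which has five equal maxima on [0, 1]) and all parameter values are illustrative choices.

```python
import math
import random

def multimodal(x):
    # Illustrative multimodal test function (not from the paper):
    # f(x) = sin(5*pi*x)^6 has five equal global maxima on [0, 1].
    return math.sin(5 * math.pi * x) ** 6

def local_search(f, x, step=0.01, shrink=0.5, tol=1e-6, lo=0.0, hi=1.0):
    """Hill-climbing refinement: probe both neighbours of x and move
    to an improving one; halve the step when neither improves."""
    fx = f(x)
    while step > tol:
        moved = False
        for cand in (x - step, x + step):
            cand = min(max(cand, lo), hi)
            fc = f(cand)
            if fc > fx:
                x, fx, moved = cand, fc, True
        if not moved:
            step *= shrink
    return x, fx

def pso_with_local_search(f, n=20, iters=100, lo=0.0, hi=1.0, seed=0):
    """Canonical gbest PSO (inertia 0.7, c1 = c2 = 1.5 are illustrative
    values), with a local-search pass refining the final best position."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    pbest = xs[:]                     # personal best positions
    pval = [f(x) for x in xs]         # personal best fitness values
    g = max(range(n), key=lambda i: pval[i])  # index of global best
    for _ in range(iters):
        for i in range(n):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (0.7 * vs[i]
                     + 1.5 * r1 * (pbest[i] - xs[i])
                     + 1.5 * r2 * (pbest[g] - xs[i]))
            xs[i] = min(max(xs[i] + vs[i], lo), hi)
            fx = f(xs[i])
            if fx > pval[i]:
                pbest[i], pval[i] = xs[i], fx
                if fx > pval[g]:
                    g = i
    # The PSO phase only converges *near* a peak; the local search
    # supplies the fine accuracy that the abstract says plain
    # population-based methods lack.
    return local_search(f, pbest[g], lo=lo, hi=hi)
```

A niching variant would apply the same refinement to each species/subpopulation best rather than to a single global best, so that all identified optima are polished to the required accuracy.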