Global optimization
Applying evolutionary programming to selected traveling salesman problems
Cybernetics and Systems
Evolving artificial intelligence
Evolutionary computation: toward a new philosophy of machine intelligence
Evolution and Optimum Seeking: The Sixth Generation
System Identification through Simulated Evolution: A Machine Learning Approach to Modeling
Handbook of Evolutionary Computation
Evolutionary Optimization Versus Particle Swarm Optimization: Philosophy and Performance Differences
EP '98 Proceedings of the 7th International Conference on Evolutionary Programming VII
An overview of evolutionary algorithms for parameter optimization
Evolutionary Computation
A modified strategy for the constriction factor in particle swarm optimization
ACAL'07 Proceedings of the 3rd Australian conference on Progress in artificial life
Self-adaptive differential evolution
CIS'05 Proceedings of the 2005 international conference on Computational Intelligence and Security - Volume Part I
No free lunch theorems for optimization
IEEE Transactions on Evolutionary Computation
Combining mutation operators in evolutionary programming
IEEE Transactions on Evolutionary Computation
Evolutionary programming made faster
IEEE Transactions on Evolutionary Computation
The particle swarm - explosion, stability, and convergence in a multidimensional complex space
IEEE Transactions on Evolutionary Computation
The search direction and the search step size are two important factors affecting the performance of optimization algorithms. In this paper, we combine Particle Swarm Optimization (PSO) with Evolutionary Programming (EP) to form two new algorithms, PSOEP and SAVPSO. The basic idea is to introduce a search direction into the mutation operator of EP, and to use a lognormal self-adaptive strategy to control the velocity of PSO, guiding each individual toward faster convergence. The algorithms are compared with respect to the similarities and differences of their basic components, as well as their performance on seven benchmark problems. Our experimental results show that PSOEP performs much better than all other EP variants, and that SAVPSO performs much better than standard PSO on the benchmark functions.
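The lognormal self-adaptive strategy referred to above originates in classical evolutionary programming, where each individual carries its own step sizes that are multiplied by lognormal noise before mutating the solution vector. As a rough illustration only, the following sketch shows that standard lognormal update; the function name, learning-rate constants, and driver code are illustrative assumptions, not the exact PSOEP or SAVPSO velocity update from the paper.

```python
import math
import random

def lognormal_self_adaptive_step(x, sigma, rng):
    """One EP-style mutation with lognormal self-adaptation.

    x     : current solution vector (list of floats)
    sigma : per-dimension step sizes carried by the individual
    rng   : random.Random instance

    Returns the mutated solution and its updated step sizes.
    This is a generic sketch of the classical scheme, not the
    paper's specific PSO velocity control.
    """
    n = len(x)
    tau = 1.0 / math.sqrt(2.0 * math.sqrt(n))   # per-dimension learning rate
    tau_prime = 1.0 / math.sqrt(2.0 * n)        # global learning rate
    g = rng.gauss(0.0, 1.0)                     # one shared sample for all dimensions
    # Multiply each step size by lognormal noise (stays strictly positive).
    new_sigma = [s * math.exp(tau_prime * g + tau * rng.gauss(0.0, 1.0))
                 for s in sigma]
    # Perturb the solution with the freshly adapted step sizes.
    new_x = [xi + si * rng.gauss(0.0, 1.0)
             for xi, si in zip(x, new_sigma)]
    return new_x, new_sigma

rng = random.Random(42)
x, sigma = [0.0] * 5, [0.1] * 5
child, child_sigma = lognormal_self_adaptive_step(x, sigma, rng)
```

Because the step sizes are multiplied by an exponential factor, they remain positive and can both grow and shrink over generations, which is what lets the search adapt its own scale without external scheduling.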