Recursive stochastic algorithms for global optimization in Rd
SIAM Journal on Control and Optimization
Genetic programming: on the programming of computers by means of natural selection
Advances in knowledge discovery and data mining
Genetic Algorithms in Search, Optimization and Machine Learning
Global random optimization by simultaneous perturbation stochastic approximation
Proceedings of the 33rd Winter Simulation Conference
On Clustering Validation Techniques
Journal of Intelligent Information Systems
Evolutionary Optimization Versus Particle Swarm Optimization: Philosophy and Performance Differences
EP '98 Proceedings of the 7th International Conference on Evolutionary Programming VII
An overview of evolutionary algorithms for parameter optimization
Evolutionary Computation
Fractional particle swarm optimization in multidimensional search space
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
Particle Swarm Optimization (PSO) is attracting ever-growing attention and has found application in many challenging optimization problems. In this paper, we focus on a major drawback of the PSO algorithm: the poor gbest update. Since gbest, as the common term in the update equation of all particles, is the primary guide of the swarm, a poor update can be a severe problem that causes premature convergence to local optima. We therefore seek a solution to the social problem in PSO, i.e. "Who will guide the guide?", which resembles the rhetorical question posed by Plato in his famous work on government: "Who will guard the guards?" (Quis custodiet ipsos custodes?). Stochastic approximation (SA) is purposefully adapted into two approaches that guide (or drive) the gbest particle, via simultaneous perturbation, toward the right direction using a gradient estimate of the underlying surface (or function), while its stochastic nature helps avoid local traps. We purposefully used simultaneous perturbation SA (SPSA) for its low cost, and since SPSA is applied only to the gbest (not the entire swarm), both approaches add a negligible overhead to the overall PSO process. We show over a wide range of non-linear functions that both approaches significantly improve the performance of PSO, especially when the SPSA parameters suit the problem at hand.
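The core idea of refining the gbest particle with SPSA can be sketched as follows. This is an illustrative implementation of Spall's standard SPSA recursion applied to a single candidate point, not the paper's exact procedure; the function name `spsa_refine`, the hyperparameter values, and the sphere objective used in the example are assumptions.

```python
import numpy as np

def spsa_refine(f, x, iters=100, a=0.1, c=0.1, A=10,
                alpha=0.602, gamma=0.101, seed=0):
    """Refine a candidate solution x (e.g. the PSO gbest) with SPSA.

    Each iteration estimates the gradient of f from only TWO function
    evaluations, regardless of dimension, then takes a descent step.
    Gain-sequence names (a, c, A, alpha, gamma) follow Spall's
    standard SPSA formulation.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    for k in range(iters):
        a_k = a / (k + 1 + A) ** alpha     # decaying step size
        c_k = c / (k + 1) ** gamma         # decaying perturbation size
        # Bernoulli +/-1 perturbation applied to all dimensions at once.
        delta = rng.choice([-1.0, 1.0], size=x.shape)
        # Simultaneous perturbation: two evaluations yield an estimate
        # of every partial derivative simultaneously.
        g_hat = (f(x + c_k * delta) - f(x - c_k * delta)) / (2.0 * c_k * delta)
        x = x - a_k * g_hat                # gradient-descent-style update
    return x

# Example usage on a simple sphere objective:
sphere = lambda v: float(np.sum(v * v))
x0 = np.array([2.0, -3.0])
x_refined = spsa_refine(sphere, x0, iters=200)
```

Because the overhead is two objective evaluations per iteration independent of the search-space dimension, and the routine is applied only to the single gbest point, its cost relative to a full swarm update is negligible, which is the property the abstract emphasizes.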