A new gradient based particle swarm optimization algorithm for accurate computation of global minimum

  • Authors:
  • Mathew M. Noel

  • Affiliations:
  • School of Electrical Engineering, VIT University, 2/36 4A East Cross Road, Gandhinagar, Vellore, Tamilnadu 632006, India

  • Venue:
  • Applied Soft Computing
  • Year:
  • 2012

Abstract

Stochastic optimization algorithms like genetic algorithms (GAs) and particle swarm optimization (PSO) perform global optimization but waste computational effort by searching randomly. On the other hand, deterministic algorithms like gradient descent converge rapidly but may get stuck in local minima of multimodal functions. Thus, an approach that combines the strengths of stochastic and deterministic optimization schemes while avoiding their weaknesses is of interest. This paper presents a new hybrid optimization algorithm that combines the PSO algorithm with gradient-based local search to achieve faster convergence and better accuracy of the final solution without getting trapped in local minima. In the new gradient-based PSO algorithm, referred to as the GPSO algorithm, PSO is used for global exploration and a gradient-based scheme is used for accurate local exploration. The global minimum is located by a process of finding progressively better local minima. The GPSO algorithm avoids the use of inertia weights and constriction coefficients, which can cause the PSO algorithm to converge to a local minimum if chosen improperly. The De Jong test suite of benchmark optimization problems was used to test the new algorithm and to facilitate comparison with the classical PSO algorithm. The GPSO algorithm is compared to four different refinements of the PSO algorithm from the literature and is shown to converge faster to a significantly more accurate final solution on a variety of benchmark test functions.
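
To make the hybrid idea concrete, below is a minimal, illustrative Python sketch of the scheme the abstract describes: a plain PSO loop handles global exploration, while a gradient-descent step periodically refines the best solution found so far, so the search settles into progressively better local minima. The benchmark function (Rastrigin, a standard multimodal test function used here only for illustration), the constants c1 and c2, the learning rate, and the decision to simply drop the inertia term are all assumptions made for this sketch; the paper's actual GPSO update rule and parameter choices may differ.

```python
import numpy as np

def rastrigin(x):
    # Multimodal benchmark with many local minima; global minimum 0 at x = 0.
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def rastrigin_grad(x):
    # Analytic gradient of the Rastrigin function.
    return 2 * x + 20 * np.pi * np.sin(2 * np.pi * x)

def gradient_refine(x, grad, lr=1e-3, steps=200):
    # Plain gradient descent used to polish a candidate solution locally.
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

def hybrid_pso(f, grad, dim=5, n_particles=30, iters=200,
               bounds=(-5.12, 5.12), c1=2.0, c2=2.0, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([f(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    gbest_val = pbest_val.min()

    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # PSO velocity/position update; the inertia term is omitted here as a
        # rough nod to the abstract's claim, not as the paper's exact rule.
        vel = c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)

        vals = np.array([f(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]

        i_best = np.argmin(pbest_val)
        if pbest_val[i_best] < gbest_val:
            gbest, gbest_val = pbest[i_best].copy(), pbest_val[i_best]

        # Gradient-based local exploration: refine the best point found so far.
        candidate = gradient_refine(gbest, grad)
        cand_val = f(candidate)
        if cand_val < gbest_val:
            gbest, gbest_val = candidate, cand_val

    return gbest, gbest_val

if __name__ == "__main__":
    best_x, best_f = hybrid_pso(rastrigin, rastrigin_grad)
    print("best value:", best_f)
```

In this sketch the gradient step only ever replaces the global best when it improves it, so the refinement cannot undo the swarm's exploration; how often and how aggressively to run the local search is a tunable trade-off.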