Analysis and development of stopping criteria for stochastic global optimization algorithms

  • Authors:
  • Charoenchai Khompatraporn; Zelda B. Zabinsky

  • Affiliations:
  • University of Washington; University of Washington

  • Venue:
  • Doctoral dissertation, University of Washington
  • Year:
  • 2004


Abstract

Stochastic global optimization algorithms have been used to solve problems in a myriad of applications, including engineering, economic, medical, and military applications. While some stochastic algorithms are developed with theoretical rigor, most are simply heuristic, which makes assessing them a challenge. Moreover, many studies report that, after a certain number of function evaluations, stochastic global optimization algorithms sometimes stall in terms of improvement of the objective function value. A common remedy in practice is to restart the algorithm. This dissertation presents a theoretical analysis that establishes a stopping and restarting strategy for these algorithms. We first develop a methodology to fairly compare various types of stochastic global optimization algorithms. We suggest using the number of function evaluations as the comparison basis when the mechanisms of the algorithms, the platform, and other aspects of the coding environment differ greatly, and then applying a comparison methodology to succinctly summarize the experimental results. Next, we establish a strategy to stop and restart stochastic global optimization algorithms by introducing sampling cost functions. The strategy is derived theoretically from two algorithms, Pure Adaptive Search and Pure Random Search. The combined algorithm is termed Multistart Pure Adaptive Search, and several of its properties are derived to provide the theoretical basis for the stopping/restarting strategy. Since Multistart Pure Adaptive Search is an idealized algorithm, it is approximated by an implementable one: Improving Hit-and-Run with multiple restarts, which we term Multistart Improving Hit-and-Run (MIHR). We extend MIHR to Dynamic Multistart Improving Hit-and-Run and use the latter algorithm in a numerical experiment. 
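The dissertation's algorithms are not spelled out in this abstract, but the kind of method described can be illustrated with a minimal sketch of Improving Hit-and-Run with multiple restarts on a box-constrained problem. All names, the uniform restart rule, and the fixed per-run evaluation budget (used as the comparison basis) are illustrative assumptions, not the dissertation's actual implementation:

```python
import math
import random

def improving_hit_and_run(f, lo, hi, max_evals, rng):
    """One run of Improving Hit-and-Run on the box [lo, hi].

    From the current point, pick a uniform random direction, sample a
    point uniformly on the chord of the box along that direction, and
    accept it only if it strictly improves the objective.
    Returns (best_x, best_f, evals_used).
    """
    n = len(lo)
    x = [rng.uniform(lo[i], hi[i]) for i in range(n)]
    fx = f(x)
    evals = 1
    while evals < max_evals:
        # Uniform random direction on the unit sphere.
        d = [rng.gauss(0.0, 1.0) for _ in range(n)]
        norm = math.sqrt(sum(di * di for di in d))
        d = [di / norm for di in d]
        # Chord of the box through x along d: x + t*d stays in [lo, hi].
        t_min, t_max = -math.inf, math.inf
        for i in range(n):
            if abs(d[i]) > 1e-12:
                a = (lo[i] - x[i]) / d[i]
                b = (hi[i] - x[i]) / d[i]
                t_min = max(t_min, min(a, b))
                t_max = min(t_max, max(a, b))
        y = [x[i] + rng.uniform(t_min, t_max) * d[i] for i in range(n)]
        fy = f(y)
        evals += 1
        if fy < fx:  # "improving": accept only strict improvement
            x, fx = y, fy
    return x, fx, evals

def multistart_ihr(f, lo, hi, evals_per_run, n_restarts, seed=0):
    """Multistart Improving Hit-and-Run: restart from a fresh uniform
    point after each run's evaluation budget is exhausted."""
    rng = random.Random(seed)
    best_x, best_f, total_evals = None, math.inf, 0
    for _ in range(n_restarts):
        x, fx, used = improving_hit_and_run(f, lo, hi, evals_per_run, rng)
        total_evals += used
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f, total_evals

# Example: sphere function on [-5, 5]^2, global minimum 0 at the origin.
sphere = lambda x: sum(xi * xi for xi in x)
x_best, f_best, evals = multistart_ihr(sphere, [-5.0, -5.0], [5.0, 5.0],
                                       evals_per_run=200, n_restarts=5)
```

Counting function evaluations (`evals`) rather than wall-clock time follows the comparison basis suggested above, so runs on different platforms remain comparable.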
The quality of the solution found by Dynamic Multistart Improving Hit-and-Run under the established stopping/restarting strategy is addressed through the probability that the best objective function value obtained is within ε of the global objective function value. A class of Lipschitz functions is used to demonstrate the usefulness of the stopping/restarting strategy. Finally, a numerical experiment is conducted to illustrate the effectiveness of the stopping/restarting strategy.
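The ε-quality criterion can be made concrete with a small experiment. In this hedged sketch, Pure Random Search stands in for the full algorithm, and the Lipschitz test function, box, and parameters are illustrative assumptions; the goal is only to show how one estimates the probability that the best value found lies within ε of the global minimum:

```python
import math
import random

def pure_random_search(f, lo, hi, n_evals, rng):
    """Pure Random Search: sample uniformly over the box, keep the best value."""
    best = math.inf
    for _ in range(n_evals):
        x = [rng.uniform(lo[i], hi[i]) for i in range(len(lo))]
        best = min(best, f(x))
    return best

def prob_within_eps(f, f_star, lo, hi, n_evals, eps, trials, seed=1):
    """Empirical probability that one run ends within eps of the optimum f_star."""
    rng = random.Random(seed)
    hits = sum(pure_random_search(f, lo, hi, n_evals, rng) - f_star <= eps
               for _ in range(trials))
    return hits / trials

# Lipschitz test function f(x) = |x1| + |x2| on [-2, 2]^2, global minimum 0.
f = lambda x: abs(x[0]) + abs(x[1])
p_hat = prob_within_eps(f, 0.0, [-2.0, -2.0], [2.0, 2.0],
                        n_evals=100, eps=0.2, trials=200)

# For Pure Random Search the true probability is known in closed form:
# one uniform sample lands in the level set {|x1| + |x2| <= 0.2} (a
# diamond of area 2 * 0.2**2 = 0.08) with probability 0.08 / 16 = 0.005,
# so p_true = 1 - (1 - 0.005)**100, which p_hat should approximate.
```

This closed-form check is exactly why idealized algorithms such as Pure Random Search and Pure Adaptive Search are useful as theoretical anchors for the stopping/restarting analysis.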