On Accelerated Random Search

  • Authors:
  • M. J. Appel; R. LaBarre; D. Radulovic

  • Venue:
  • SIAM Journal on Optimization
  • Year:
  • 2003

Abstract

A new variant of pure random search (PRS) for function optimization is introduced. The basic finite-descent accelerated random search (ARS) algorithm is simple: the search is confined to shrinking neighborhoods of a previous record-generating value, and the search neighborhood is reinitialized to the entire space whenever a new record is found. Local maxima are avoided by an automatic restart feature that reinitializes the search neighborhood after a given number of shrink steps has been performed.

One goal of this article is to provide rigorous mathematical comparisons of ARS with PRS. It is shown that the sequence produced by the ARS process converges with probability one to the maximum of a continuous objective function, and does so faster than the PRS sequence by adjustably large multiples of the time step (Theorem). For an infinite-descent (no automatic restart) version of ARS, it is shown that if the objective function satisfies a local nonflatness condition, then the right tails of the inter-record-time distributions are exponentially smaller than those of PRS (Theorem).

Performance comparisons between ARS, PRS, and three quasi-Newton-type optimization routines are reported for finding extrema of (i) a small collection of standard test functions of two variables and (ii) d-dimensional polynomials with random roots. A three-way performance comparison between ARS, PRS, and a simulated annealing algorithm on traveling salesman problems is also reported.
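The finite-descent ARS loop described in the abstract is easy to state in code. The following is a minimal sketch under assumptions the abstract does not fix: maximization over the unit cube, a box neighborhood in place of a ball, a shrink factor of 1/2, and a restart whenever the radius falls below a threshold. The parameter names (`shrink`, `r_min`, `n_iters`) and their defaults are illustrative choices, not the paper's notation.

```python
import numpy as np

def ars_maximize(f, dim, n_iters=10_000, shrink=0.5, r_min=1e-6, rng=None):
    """Sketch of accelerated random search (ARS) maximizing f on [0, 1]^dim.

    Assumptions not taken from the paper: box neighborhoods, shrink
    factor 0.5, and restart when the radius drops below r_min.
    """
    rng = np.random.default_rng() if rng is None else rng
    x_best = rng.uniform(0.0, 1.0, size=dim)   # current record point
    f_best = f(x_best)
    r = 1.0                                    # current search radius
    for _ in range(n_iters):
        # Sample uniformly from the box of half-width r around the
        # record, clipped to the domain.
        lo = np.clip(x_best - r, 0.0, 1.0)
        hi = np.clip(x_best + r, 0.0, 1.0)
        x = rng.uniform(lo, hi)
        fx = f(x)
        if fx > f_best:
            x_best, f_best = x, fx
            r = 1.0        # new record: reinitialize to the entire space
        else:
            r *= shrink    # no record: shrink the search neighborhood
            if r < r_min:
                r = 1.0    # automatic restart, to escape local maxima
    return x_best, f_best
```

As a quick check, `ars_maximize(lambda x: -np.sum((x - 0.3) ** 2), dim=2)` homes in on the global maximizer at (0.3, 0.3). Setting `r_min` very small approximates the infinite-descent (no-restart) variant analyzed in the second theorem.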