Generalized annealing algorithms for discrete optimization problems

  • Authors:
  • Sanphet Sukhapesna; Dushyant Sharma

  • Affiliations:
  • University of Michigan; University of Michigan

  • Venue:
  • Generalized annealing algorithms for discrete optimization problems
  • Year:
  • 2005

Abstract

Despite the success of simulated annealing in finding near-optimal solutions to intractable discrete optimization problems, there have been attempts to enhance the algorithm by modifying its transition probabilities. However, the asymptotic behavior of such modified algorithms is difficult to analyze; the well-known convergence results for simulated annealing usually do not apply to other annealing algorithms. In this research, we consider a generalized annealing algorithm whose acceptance probability depends on the cost of the current solution, the cost of the candidate solution, and the control parameter. We present convergence results that relax the sufficient conditions imposed by the most general convergence theorems in the literature. We study the asymptotic behavior of the stationary distribution associated with the generalized annealing algorithm and, using a novel approach based on the Markov Chain Tree Theorem, provide a convergence theorem for that distribution. Exploiting the same insights, we analyze the limiting stationary distribution of a generalized annealing algorithm that uses local information other than the cost of the current solution to guide the search. The class of convergent transition probabilities can be extended further by employing an adaptive cooling schedule, which lets us relax the condition that the transition probabilities asymptotically behave like an exponential function. Based on an adaptive cooling-schedule scheme, we present conditions that guarantee convergence to optimal solutions. We develop a novel stochastic search algorithm, called functional annealing, that combines the attractive features of simulated annealing and local search, allowing the algorithm to exploit very large-scale neighborhood structures. Furthermore, we combine functional annealing with statistical learning techniques to enhance the algorithm's finite-time performance. Our computational results on the Quadratic Assignment Problem reveal that functional annealing algorithms outperform many existing algorithms. We also present theoretical and computational results on the use of functional annealing to solve a discrete stochastic optimization problem.
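
To make the setting concrete, the sketch below gives a generic annealing loop in which the acceptance rule is an arbitrary function of the current cost, the candidate cost, and the control parameter, which is the structure of the generalized annealing algorithm described above. It is a minimal illustration only: the names (generalized_annealing, metropolis, log_cooling) and the particular acceptance rule and cooling schedule shown are illustrative assumptions, not the formulations developed in the dissertation.

    import math
    import random

    def generalized_annealing(cost, neighbors, x0, accept_prob, cooling, n_iters=10000):
        """Generic annealing loop (illustrative sketch).

        cost(x)                        -> objective value of solution x
        neighbors(x)                   -> non-empty list of candidates reachable from x
        accept_prob(c_cur, c_cand, t)  -> acceptance probability depending only on the
                                          current cost, the candidate cost, and the
                                          control parameter t (the generalized setting)
        cooling(k)                     -> control parameter at iteration k
        """
        x = x0
        best, best_cost = x0, cost(x0)
        for k in range(n_iters):
            t = cooling(k)
            cand = random.choice(neighbors(x))
            if random.random() < accept_prob(cost(x), cost(cand), t):
                x = cand
                c = cost(x)
                if c < best_cost:
                    best, best_cost = x, c
        return best, best_cost

    # Classical simulated annealing is recovered with the Metropolis acceptance rule:
    def metropolis(c_cur, c_cand, t):
        return 1.0 if c_cand <= c_cur else math.exp(-(c_cand - c_cur) / t)

    # A logarithmic cooling schedule of the kind used in convergence analyses:
    def log_cooling(k, t0=10.0):
        return t0 / math.log(k + 2)

Classical simulated annealing corresponds to the Metropolis rule shown; substituting other cost-based acceptance functions, or letting the schedule adapt to the search, yields the broader family of annealing algorithms whose convergence the dissertation analyzes.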