Solving nonlinear constrained optimization problems through constraint partitioning

  • Authors:
  • Benjamin W. Wah; Yixin Chen

  • Affiliations:
  • University of Illinois at Urbana-Champaign; University of Illinois at Urbana-Champaign

  • Venue:
  • Doctoral dissertation, University of Illinois at Urbana-Champaign
  • Year:
  • 2005


Abstract

In this dissertation, we propose a general approach that can significantly reduce the complexity of solving constrained nonlinear optimization (NLP) problems. A key observation is that most NLPs have structured arrangements of constraints. We have developed techniques to exploit these constraint structures by partitioning the constraints into subproblems related by global constraints. Constraint partitioning yields greatly relaxed subproblems that are significantly easier to solve. To support constraint partitioning, we have proposed the theory of extended saddle points (ESP), which provides a necessary and sufficient condition for constrained local optima of NLPs. ESP facilitates constraint partitioning by supplying a set of necessary conditions, one for each subproblem, that characterize its local optima, and it further reduces complexity by defining a much smaller search space for backtracking within each subproblem. Since resolving the global constraints incurs only a small overhead, our approach leads to a significant reduction of complexity. Our partition-and-resolve approach has achieved substantial improvements over existing methods in AI planning and mathematical programming. We have applied it to large-scale AI planning problems, as well as to continuous and mixed-integer NLPs from standard benchmarks. We have solved large-scale problems that were not solvable by other leading methods and have improved solution quality on many others.
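
For illustration, the partition-and-resolve strategy described in the abstract might be sketched roughly as follows. The sketch below is not taken from the dissertation: the subproblem and constraint objects, the solve_subproblem routine, and the use of multiplicative penalty updates are assumptions made only to make the control flow concrete. In the authors' related published work the subproblem objective takes an l1-penalty form along the lines of f(x) + alpha'|h(x)| + beta'max(0, g(x)); the notation here is assumed rather than quoted.

    # Minimal sketch of a partition-and-resolve loop (illustrative only; not
    # the dissertation's code). Each subproblem is assumed to own a block of
    # variables and its local constraints; global constraints couple the
    # blocks and are handled through penalty weights that are raised until
    # the constraints are satisfied.

    def partition_and_resolve(subproblems, global_constraints,
                              solve_subproblem, max_iters=100, penalty_step=10.0):
        """Alternate between solving relaxed subproblems and resolving
        violated global constraints by increasing their penalty weights."""
        penalties = {c: 1.0 for c in global_constraints}  # one weight per global constraint
        solution = {}                                      # subproblem -> local solution

        for _ in range(max_iters):
            # Inner step: solve each subproblem against its own (local)
            # constraints, with penalty terms for the global constraints
            # that touch its variables.
            for sub in subproblems:
                solution[sub] = solve_subproblem(sub, solution, penalties)

            # Outer step: measure violations of the global constraints
            # across the combined subproblem solutions.
            violated = [c for c in global_constraints if c.violation(solution) > 0]
            if not violated:
                return solution          # all global constraints resolved

            # Raise the penalties of violated global constraints only,
            # leaving already-satisfied ones untouched.
            for c in violated:
                penalties[c] *= penalty_step

        return solution                  # best effort after max_iters

The point the sketch tries to convey is the division of labor claimed in the abstract: backtracking and constraint handling happen inside each subproblem against its much smaller local constraint set, while the outer loop only has to resolve the relatively few global constraints, which is where the reduction in complexity comes from.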