Convex separable optimization is not much harder than linear optimization

  • Authors:
  • D. S. Hochbaum; J. George Shanthikumar

  • Affiliations:
  • Univ. of California, Berkeley; Univ. of California, Berkeley

  • Venue:
  • Journal of the ACM (JACM)

  • Year:
  • 1990

Abstract

This paper proves the polynomiality of nonlinear separable convex (concave) optimization problems over linear constraints whose matrix has “small” subdeterminants, and the polynomiality of such integer problems, provided the integer linear version of the problem is polynomial. A general-purpose algorithm is presented for converting procedures that solve linear programming problems into procedures that solve the corresponding separable convex problems. The conversion is polynomial for constraint matrices with polynomially bounded subdeterminants. Among the important corollaries of the algorithm is the extension of the polynomial solvability of integer linear programming problems with a totally unimodular constraint matrix to integer separable convex programming. An algorithm is also presented for finding an ε-accurate optimal continuous solution to the nonlinear problem that is polynomial in log(1/ε), the input size, and the largest subdeterminant of the constraint matrix. These developments rest on proximity results between the continuous and integral optimal solutions for problems with any nonlinear separable convex objective function. A practical feature of the algorithm is that it does not demand an explicit representation of the nonlinear function, only a polynomial number of function evaluations on a prespecified grid.
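
The conversion device the abstract alludes to is easiest to see in its simplest form: a separable convex objective that is only ever evaluated on an integer grid can be replaced by its piecewise-linear interpolant, and the result is an ordinary linear program. The sketch below is a minimal illustration of that idea only, not the authors' polynomial conversion algorithm; the function names, the unit-segment ("delta") formulation, and the use of scipy.optimize.linprog are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def separable_convex_lp(fs, A, b, u):
    """Minimize sum_j f_j(x_j) subject to A x <= b and 0 <= x_j <= u_j,
    by replacing each convex f_j with its piecewise-linear interpolant on the
    integer grid 0, 1, ..., u_j and solving the resulting LP (illustrative sketch)."""
    m, n = A.shape
    slopes, cols = [], []
    for j, f in enumerate(fs):
        for k in range(1, u[j] + 1):
            # Slope of f_j on the unit segment [k-1, k]; convexity makes these
            # nondecreasing in k, so segments fill up in order at an LP optimum.
            slopes.append(f(k) - f(k - 1))
            cols.append(A[:, j])              # segment variable inherits column j of A
    c = np.array(slopes)
    A_seg = np.column_stack(cols)             # one column per unit segment
    res = linprog(c, A_ub=A_seg, b_ub=b, bounds=[(0.0, 1.0)] * len(c),
                  method="highs")
    # Recover x_j as the total length of its chosen segments.
    x, pos = np.zeros(n), 0
    for j in range(n):
        x[j] = res.x[pos:pos + u[j]].sum()
        pos += u[j]
    value = res.fun + sum(f(0) for f in fs)   # add back the constant f_j(0) terms
    return x, value

# Example: minimize x1**2 + (x2 - 3)**2 subject to x1 + x2 <= 4, 0 <= x <= 5.
fs = [lambda t: t ** 2, lambda t: (t - 3) ** 2]
x, val = separable_convex_lp(fs, np.array([[1.0, 1.0]]), np.array([4.0]), [5, 5])
print(x, val)   # expect roughly x = [0, 3] with objective value 0
```

Because the grid contributes u_j segments per variable, this naive formulation is only pseudopolynomial; the abstract's point is that the conversion can instead be made polynomial in log(1/ε), the input size, and the largest subdeterminant by exploiting the proximity results between continuous and integral optima, using only function evaluations on a prespecified grid.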