Convergence and Error Bound for Perturbation of Linear Programs

  • Authors: Paul Tseng
  • Affiliation: Department of Mathematics, University of Washington, Seattle, WA 98195. tseng@math.washington.edu
  • Venue: Computational Optimization and Applications - Special issue on computational optimization—a tribute to Olvi Mangasarian, part II
  • Year: 1999

Abstract

In various penalty/smoothing approaches to solving a linear program, one regularizes the problem by adding to the linear cost function a separable nonlinear function multiplied by a small positive parameter. Popular choices of this nonlinear function include the quadratic function, the logarithm function, and the x ln(x)-entropy function. Furthermore, the solutions generated by such approaches may satisfy the linear constraints only inexactly and thus are optimal solutions of the regularized problem with a perturbed right-hand side. We give a general condition for such an optimal solution to converge to an optimal solution of the original problem as the perturbation parameter tends to zero. In the case where the nonlinear function is strictly convex, we further derive a local (error) bound on the distance from such an optimal solution to the limiting optimal solution of the original problem, expressed in terms of the perturbation parameter.
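
To make the setup concrete, the regularized problem described in the abstract can be sketched as below. The symbols c, A, b, x, ε (the small positive regularization parameter), φ (the separable penalty term), and δ (the right-hand-side perturbation caused by the inexactly satisfied constraints) are illustrative notation chosen here, not necessarily the paper's own.

```latex
% A minimal sketch of the regularized linear program described in the abstract.
% The symbols c, A, b, x, \epsilon, \phi, and \delta are illustrative notation,
% not necessarily those used in the paper.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
\begin{aligned}
\min_{x \ge 0}\quad & c^{\top} x \;+\; \epsilon \sum_{j} \phi(x_j)
  \qquad \text{subject to}\quad A x = b + \delta(\epsilon),\\
\text{with, e.g.,}\quad
  & \phi(t) = \tfrac{1}{2}\, t^{2} \quad \text{(quadratic)},\\
  & \phi(t) = -\ln t \quad \text{(logarithmic barrier)},\\
  & \phi(t) = t \ln t \quad \text{(entropy)}.
\end{aligned}
\]
\end{document}
```

Driving ε to zero recovers the original linear program; the paper's question is when the corresponding regularized solutions converge to an optimal solution of that program, and, for strictly convex φ, how the distance to the limiting solution is bounded in terms of the perturbation parameter.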