A “Joint+Marginal” Approach to Parametric Polynomial Optimization

  • Authors: Jean B. Lasserre
  • Affiliations: lasserre@laas.fr
  • Venue: SIAM Journal on Optimization
  • Year: 2010

Abstract

Given a compact parameter set $\mathbf{Y}\subset\mathbb{R}^p$, we consider polynomial optimization problems $(\mathbf{P}_{\mathbf{y}})$ on $\mathbb{R}^n$ whose description depends on the parameter $\mathbf{y}\in\mathbf{Y}$. We assume that one can compute all moments of some probability measure $\varphi$ on $\mathbf{Y}$, absolutely continuous with respect to the Lebesgue measure (e.g., $\mathbf{Y}$ is a box or a simplex and $\varphi$ is uniformly distributed). We then provide a hierarchy of semidefinite relaxations whose associated sequence of optimal solutions converges to the moment vector of a probability measure that encodes all information about all global optimal solutions $\mathbf{x}^*(\mathbf{y})$ of $(\mathbf{P}_{\mathbf{y}})$, as $\mathbf{y}$ ranges over $\mathbf{Y}$. In particular, one may approximate as closely as desired any polynomial functional of the optimal solutions, such as their $\varphi$-mean. In addition, using this knowledge of moments, the measurable function $\mathbf{y}\mapsto x^*_k(\mathbf{y})$ that maps $\mathbf{y}$ to the $k$th coordinate of an optimal solution can be estimated, e.g., by maximum entropy methods. Also, for a Boolean variable $x_k$, one may approximate as closely as desired its persistency $\varphi(\{\mathbf{y}: x^*_k(\mathbf{y})=1\})$, i.e., the probability that in an optimal solution $\mathbf{x}^*(\mathbf{y})$ the coordinate $x^*_k(\mathbf{y})$ takes the value 1. Last but not least, from an optimal solution of the dual semidefinite relaxations, we obtain a sequence of polynomial (resp., piecewise polynomial) lower approximations that converge to the optimal value function in $L_1(\varphi)$ (resp., $\varphi$-almost uniformly).
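As the title suggests, the approach works with a "joint" measure $\mu$ on $\mathbf{Y}\times\mathbb{R}^n$, supported on the pairs $(\mathbf{y},\mathbf{x})$ with $\mathbf{x}$ feasible for $(\mathbf{P}_{\mathbf{y}})$, whose $\mathbf{Y}$-marginal is constrained to equal $\varphi$; minimizing $\int f\,d\mu$ subject to this marginal constraint yields $\int_{\mathbf{Y}} J(\mathbf{y})\,d\varphi(\mathbf{y})$ for the optimal value function $J$, and the semidefinite hierarchy relaxes this measure problem to finitely many moments. The following is a minimal sketch of that moment structure on a toy instance, not the paper's implementation: it assumes cvxpy with an SDP-capable solver, and the problem data ($\mathbf{Y}=[0,1]$, $\varphi$ uniform, $(\mathbf{P}_{\mathbf{y}})$: $\min\{(x-y)^2 : x\in[0,1]\}$, so $x^*(y)=y$) as well as the fixed relaxation order are illustrative choices.

```python
import cvxpy as cp

# Relaxation order: we work with moments z[(a, b)] ~ integral of y^a x^b dmu,
# a + b <= 2d, where mu is a measure on Y x R (toy case n = p = 1) whose
# Y-marginal must equal phi.
d = 2
monos = [(a, b) for a in range(2 * d + 1) for b in range(2 * d + 1) if a + b <= 2 * d]
z = {m: cp.Variable() for m in monos}

# "Joint+marginal" constraint: the Y-marginal of mu equals phi = uniform on
# [0, 1], whose moments are integral y^a dphi = 1 / (a + 1). (For this toy Y the
# marginal equalities already pin down all y-moments, so no localizing matrix
# for y(1 - y) >= 0 is needed to keep the relaxation valid.)
constraints = [z[(a, 0)] == 1.0 / (a + 1) for a in range(2 * d + 1)]

def moment_matrix(shift=(0, 0), degree=d):
    """Moment/localizing matrix: entry (u, v) is the moment z[u + v + shift]."""
    basis = [(a, b) for a in range(degree + 1)
             for b in range(degree + 1) if a + b <= degree]
    return cp.bmat([[z[(u[0] + v[0] + shift[0], u[1] + v[1] + shift[1])]
                     for v in basis] for u in basis])

# M_d(z) PSD, plus the localizing matrix for g(x) = x - x^2 >= 0, i.e. x in [0, 1].
constraints.append(moment_matrix() >> 0)
constraints.append(moment_matrix(shift=(0, 1), degree=d - 1)
                   - moment_matrix(shift=(0, 2), degree=d - 1) >> 0)

# Objective: L_z(f) for f(y, x) = (x - y)^2 = x^2 - 2xy + y^2.
prob = cp.Problem(cp.Minimize(z[(0, 2)] - 2 * z[(1, 1)] + z[(2, 0)]), constraints)
prob.solve()
print("rho_d           ~", prob.value)        # lower bound on int_Y J(y) dphi(y) = 0
print("phi-mean of x*  ~", z[(0, 1)].value)   # should be ~ 1/2
```

In this toy instance the order-$2$ relaxation is already exact: since $x^*(y)=y$, any optimal moment vector has first moment $z_{(0,1)}=1/2$, so the solver recovers the $\varphi$-mean of $x^*(\mathbf{y})$, illustrating the abstract's claim that polynomial functionals of the optimal solutions can be approximated from the moment relaxations.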