Parallel and distributed computation: numerical methods.
Dual coordinate ascent methods for non-strictly convex minimization. Mathematical Programming, Series A and B.
Parallel gradient distribution in unconstrained optimization. SIAM Journal on Control and Optimization.
New inexact parallel variable distribution algorithms. Computational Optimization and Applications.
Error bounds in mathematical programming. Mathematical Programming, Series A and B (special issue: papers from ISMP97, the 16th International Symposium on Mathematical Programming, Lausanne, EPFL).
On the convergence of constrained parallel variable distribution algorithms. SIAM Journal on Optimization.
Parallel variable transformation in unconstrained optimization. SIAM Journal on Optimization.
In the parallel variable distribution (PVD) approach to solving optimization problems, the variables are distributed among parallel processors, with each processor having primary responsibility for updating its own block of variables while allowing the remaining "secondary" variables to change in a restricted fashion along some easily computable directions. For constrained nonlinear programs, convergence was established in [4] in the special case of convex block-separable constraints. In [11], the PVD approach was extended to problems with general convex constraints by utilizing projected gradient directions for the change of secondary variables. In this paper, we propose two new variants of PVD for the constrained case. For block-separable constraints, we develop a parallel sequential quadratic programming algorithm; this is the first PVD-type method whose convergence does not require convexity of the feasible set. For inseparable convex constraints, we propose a PVD method based on suitable approximate projected gradient directions. Using such approximate directions is especially important when the projection operation is computationally expensive.
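To make the PVD idea concrete, the following is a minimal, self-contained sketch of one synchronous PVD-style iteration on a small unconstrained convex quadratic. It is not the paper's constrained algorithm: the objective, the single-variable blocks, the grid of secondary step lengths, and the "keep the best candidate" synchronization rule are all illustrative assumptions. Each simulated processor updates its primary variable (here by inner gradient steps) while the secondary variables move along the negative-gradient direction scaled by a trial step length.

```python
# A toy PVD-style iteration (illustrative sketch, not the paper's method).
# Objective: a small non-separable convex quadratic,
#   f(x0, x1) = (x0 - 1)^2 + (x1 - 2)^2 + 0.5 * (x0 - x1)^2,
# whose unique minimizer is (1.25, 1.75) with f = 0.25.

def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2 + 0.5 * (x[0] - x[1]) ** 2

def grad(x):
    return [2.0 * (x[0] - 1.0) + (x[0] - x[1]),
            2.0 * (x[1] - 2.0) - (x[0] - x[1])]

def pvd_step(x, step=0.1, inner_iters=50, lam_grid=(0.0, 0.05, 0.1, 0.2)):
    """One synchronous PVD-style step: each 'processor' p updates its
    primary variable x_p while secondary variables move a restricted
    amount lam along the easily computable direction -grad(x)."""
    g = grad(x)
    candidates = []
    for p in range(len(x)):          # each loop iteration plays one processor
        best = None
        for lam in lam_grid:
            # Secondary variables: restricted move along -gradient.
            y = [x[q] - lam * g[q] for q in range(len(x))]
            y[p] = x[p]
            # Primary variable: a few gradient-descent steps on block p.
            for _ in range(inner_iters):
                y[p] -= step * grad(y)[p]
            if best is None or f(y) < f(best):
                best = y[:]
        candidates.append(best)
    # Synchronization: keep the best candidate (one simple sync rule;
    # other PVD variants recombine the blocks instead).
    return min(candidates, key=f)

x = [0.0, 0.0]
for _ in range(20):
    x = pvd_step(x)
# x is now close to the minimizer (1.25, 1.75)
```

Because `lam_grid` includes 0.0, each candidate is at least as good as a plain block-Jacobi update, so the objective decreases monotonically; the nonzero step lengths let the secondary variables help, which is the distinguishing feature of PVD over simple block decomposition.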