Relaxation methods for network flow problems with convex arc costs
SIAM Journal on Control and Optimization
On large scale nonlinear network optimization
Mathematical Programming: Series A and B
Some numerical experiments with variable-storage quasi-Newton algorithms
Mathematical Programming: Series A and B
On the limited memory BFGS method for large scale optimization
Mathematical Programming: Series A and B
Network flows: theory, algorithms, and applications
Application of the dual active set algorithm to quadratic network optimization
Computational Optimization and Applications
Parallel alternating direction multiplier decomposition of convex programs
Journal of Optimization Theory and Applications
A limited memory algorithm for bound constrained optimization
SIAM Journal on Scientific Computing
A partitioned ε-relaxation algorithm for separable convex
Computational Optimization and Applications - Special issue on computational optimization—a tribute to Olvi Mangasarian, part I
A primal-dual algorithm for monotropic programming and its application to network optimization
Computational Optimization and Applications
Newton's method for large bound-constrained optimization problems
SIAM Journal on Optimization
An ε-relaxation method for separable convex cost network flow problems
SIAM Journal on Optimization
Applying a Newton method to strictly convex separable network quadratic programs
SIAM Journal on Optimization
A survey of algorithms for convex multicommodity flow problems
Management Science
Implementing a proximal algorithm for some nonlinear multicommodity flow problems
Networks - Special Issue on Multicommodity Flows and Network Design
Solving the quadratic trust-region subproblem in a low-memory BFGS framework
Optimization Methods & Software - Special issue: The Joint EUROPT-OMS Conference on Optimization, 4-7 July 2007, Prague, Czech Republic, Part I
We propose a new algorithm for linearly constrained strictly convex problems. The algorithm builds on the characterization of saddle points introduced earlier in [Ouorou, A., 2000, A primal-dual algorithm for monotropic programming and its application to network optimization. Computational Optimization and Applications, 15(2), 125-143], using two different augmented Lagrangian functions defined for the primal problem and its dual. The saddle points may be computed in a variety of ways; we propose a scheme that results in a special implementation of Martinet's proximal algorithm [Martinet, B., 1970, Régularisation d'inéquations variationnelles par approximations successives. Revue Française d'Informatique et de Recherche Opérationnelle, 3, 154-179]. In the primal space, the resulting algorithm appears as a nonsmooth version of the projection algorithm of Rosen [Rosen, J.B., 1960, The gradient projection method for nonlinear programming, part I: linear constraints. Journal of the Society for Industrial and Applied Mathematics, 8, 181-217]. The dual iterates are generated through an unconstrained subproblem which can be solved efficiently by L-BFGS [Byrd, R., Lu, P., Nocedal, J. and Zhu, C., 1995, A limited memory algorithm for bound constrained optimization. SIAM Journal on Scientific Computing, 16(5), 1190-1208; Liu, D. and Nocedal, J., 1989, On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45, 503-528]. We establish convergence without recourse to the resolvent of maximal monotone operators. To assess the numerical behaviour of the algorithm, we apply it to randomly generated quadratic network flow problems and compare it with PPRN, a specialized code for linear and nonlinear cost network flow problems.
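The overall scheme — an outer proximal loop whose dual subproblem is handed to an L-BFGS solver, with the primal flow recovered from the dual iterate — can be sketched on a toy quadratic network flow instance. This is a minimal illustration under stated assumptions, not the paper's actual method: the 3-node network, the matrices `A`, `Q`, `c`, `b`, and the proximal parameter `lam` are all invented for the example, and SciPy's general-purpose L-BFGS-B routine stands in for the limited-memory BFGS solver cited above.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical instance: minimize 0.5 x'Qx + c'x subject to flow
# conservation A x = b on a 3-node, 3-arc network (arcs (0,1), (0,2),
# (1,2)); A is the reduced node-arc incidence matrix (last node dropped
# so that A has full row rank).
A = np.array([[1.0, 1.0, 0.0],    # node 0: arcs (0,1) and (0,2) leave
              [-1.0, 0.0, 1.0]])  # node 1: arc (0,1) enters, (1,2) leaves
b = np.array([2.0, -1.0])         # node 0 supplies 2 units, node 1 demands 1
Q = np.diag([1.0, 2.0, 1.5])      # strictly convex separable arc costs
c = np.array([1.0, 0.5, 0.0])
Qinv = np.linalg.inv(Q)

def x_of_y(y):
    # Minimizer of the Lagrangian f(x) + y'(Ax - b) over x (closed form
    # because f is a strictly convex quadratic): Qx + c + A'y = 0.
    return -Qinv @ (c + A.T @ y)

def neg_prox_dual(y, y_prev, lam):
    # Negative of the dual function, plus the proximal regularization
    # (1/2*lam)*||y - y_prev||^2, so that minimizing it performs one
    # proximal step on the dual maximization.
    x = x_of_y(y)
    g = 0.5 * x @ Q @ x + c @ x + y @ (A @ x - b)
    return -g + (0.5 / lam) * np.sum((y - y_prev) ** 2)

y = np.zeros(2)
lam = 10.0
for _ in range(50):  # outer proximal iterations
    res = minimize(neg_prox_dual, y, args=(y, lam), method="L-BFGS-B")
    y = res.x        # each inner subproblem is unconstrained and smooth
x = x_of_y(y)        # recover the primal flow from the final dual iterate
print("flow:", np.round(x, 4), "residual:", np.round(A @ x - b, 6))
```

With exact inner solves this is the classical proximal point algorithm on the dual, so the iterates contract toward the dual optimum and the recovered `x` satisfies the flow-conservation constraints in the limit; larger `lam` speeds the outer contraction at the price of a harder inner subproblem.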