This paper studies a primal-dual interior/exterior-point path-following approach for linear programming, motivated by the use of an iterative solver rather than a direct solver for the search direction. We begin with the usual perturbed primal-dual optimality equations. Under nondegeneracy assumptions, this nonlinear system is well posed, i.e. it has a nonsingular Jacobian at optimality and is not necessarily ill-conditioned as the iterates approach optimality. Assuming that a basis matrix (easily factorizable and well conditioned) can be found, we apply a simple preprocessing step to eliminate both the primal and dual feasibility equations. This results in a single bilinear equation that preserves both the well-posedness property and sparsity. We then solve this equation either with a direct method or with an iterative solver within an inexact Newton framework. Since the linearization is well posed, once the iterates are close enough to the optimum we use affine scaling without maintaining nonnegativity, i.e. we switch to a pure Newton step. In addition, we identify some of the primal and dual variables that converge to 0 and delete them (a purify step).

We test our method on random nondegenerate problems and on problems from the Netlib set, and we compare it with the standard normal equations (NEQ) approach; a heuristic is used to find the basis matrix. We show that our method is efficient for large, well-conditioned problems. It is slower than NEQ on ill-conditioned problems, but it yields higher-accuracy solutions.
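The perturbed primal-dual optimality equations and the inexact-Newton idea above can be sketched on a toy standard-form LP. This is a minimal illustration, not the paper's method: the LP data, the centering parameter, and the use of GMRES on the full (unreduced) Jacobian are assumptions for the sketch, whereas the paper first eliminates the feasibility equations via a basis matrix and works with the resulting bilinear equation.

```python
import numpy as np
from scipy.sparse.linalg import gmres

# Toy standard-form LP (illustrative data):  min c^T x  s.t.  A x = b, x >= 0
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0, 3.0])
m, n = A.shape

def newton_step(x, y, s, mu):
    """One Newton step on the perturbed optimality conditions
    F(x, y, s) = (A x - b, A^T y + s - c, X S e - mu e) = 0,
    with GMRES standing in for the inexact-Newton iterative solve."""
    J = np.block([
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.zeros((n, n)), A.T,              np.eye(n)],
        [np.diag(s),       np.zeros((n, m)), np.diag(x)],
    ])
    F = np.concatenate([A @ x - b, A.T @ y + s - c, x * s - mu])
    d, _ = gmres(J, -F)          # inexact solve of the linearization
    return d[:n], d[n:n + m], d[n + m:]

x, y, s = np.ones(n), np.zeros(m), np.ones(n)   # strictly positive start
for _ in range(30):
    mu = 0.1 * (x @ s) / n                      # centering target along the path
    dx, dy, ds = newton_step(x, y, s, mu)
    # Damped step keeping (x, s) > 0; the paper drops this safeguard
    # (pure Newton step) once the iterates are close enough to optimality.
    ratios = [v / -d for v, d in zip(np.concatenate([x, s]),
                                     np.concatenate([dx, ds])) if d < 0]
    alpha = min(1.0, 0.9 * min(ratios, default=1.0))
    x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds

print(np.round(x, 4))   # approaches the optimal vertex (1, 0, 0)
```

Under the nondegeneracy assumption the Jacobian stays nonsingular at optimality, which is what makes the iterative (inexact) solve viable without the severe ill-conditioning usually associated with interior-point linear systems.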