A new polynomial-time algorithm for linear programming. Combinatorica.
Matrix analysis.
A conjugate gradient algorithm for sparse linear inequalities. Journal of Computational and Applied Mathematics.
Surrogate methods for linear inequalities. Journal of Optimization Theory and Applications.
New iterative methods for linear inequalities. Journal of Optimization Theory and Applications.
Linear optimization and extensions: theory and algorithms.
A global Newton method II: analytic centers. Mathematical Programming: Series A and B (special issue: Festschrift in honor of Philip Wolfe, part II: studies in nonlinear programming).
Smoothing methods for convex inequalities and linear complementarity problems. Mathematical Programming: Series A and B.
Solving linear inequalities in a least squares sense. SIAM Journal on Scientific Computing (special issue on iterative methods in numerical linear algebra; selected papers from the Colorado conference).
Complexity analysis of an interior cutting plane method for convex feasibility problems. SIAM Journal on Optimization.
On solving systems of linear inequalities with artificial neural networks. IEEE Transactions on Neural Networks.
The problem of finding an x ∈ R^n such that Ax ≤ b and x ≥ 0 arises in numerous contexts. We propose a new optimization method for solving this feasibility problem. After converting Ax ≤ b into a system of equations by introducing a slack variable for each linear inequality, the method imposes an entropy function over both the original and the slack variables as the objective. The resulting entropy optimization problem is convex and has an unconstrained convex dual. If the system is consistent and has an interior solution, a closed-form formula converts the dual optimal solution into the primal optimal solution, which is a feasible solution of the original system of linear inequalities. An algorithm based on Newton's method is proposed for solving the unconstrained dual problem. The algorithm is globally convergent with a quadratic rate of local convergence. If the system is inconsistent, the unconstrained dual is shown to be unbounded, and the same algorithm can detect this inconsistency. Our numerical examples show that the number of iterations is insensitive both to problem size and to the distance between the initial solution and the feasible region. The performance of the proposed algorithm is compared with that of the surrogate constraint algorithm recently developed by Yang and Murty. The comparison indicates that the proposed method is particularly suitable when the number of constraints exceeds the number of variables and the initial solution is far from the feasible region.
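The approach outlined in the abstract can be sketched in code. The following is a minimal illustration under standard assumptions, not the paper's implementation: with slacks s ≥ 0 the system becomes M y = b for M = [A I] and y = (x, s), the entropy objective is Σ_j y_j ln y_j, the unconstrained concave dual is g(λ) = bᵀλ − Σ_j exp((Mᵀλ)_j − 1), and the closed-form primal recovery is y = exp(Mᵀλ − 1). The line search and stopping rule below are generic choices; the paper's exact algorithm may differ.

```python
import numpy as np

def entropy_feasibility(A, b, tol=1e-10, max_iter=200):
    """Sketch of the entropy-dual approach: maximize the unconstrained
    concave dual g(lam) = b.lam - sum_j exp((M^T lam)_j - 1) with a
    damped Newton method, then recover y = exp(M^T lam - 1) in closed
    form.  If the system is inconsistent, the dual is unbounded and the
    iterates diverge instead of converging."""
    m, n = A.shape
    M = np.hstack([A, np.eye(m)])            # M y = b with y = (x, s) >= 0

    def g(lam):                              # dual objective to maximize
        return b @ lam - np.exp(M.T @ lam - 1.0).sum()

    lam = np.zeros(m)
    for _ in range(max_iter):
        y = np.exp(M.T @ lam - 1.0)          # closed-form primal recovery
        grad = b - M @ y                     # gradient of the dual
        if np.linalg.norm(grad) < tol:
            break
        H = M @ (y[:, None] * M.T)           # = -Hessian; positive definite
        step = np.linalg.solve(H, grad)      # Newton ascent direction
        t = 1.0                              # Armijo backtracking (generic)
        while g(lam + t * step) < g(lam) + 0.25 * t * (grad @ step):
            t *= 0.5
        lam = lam + t * step
    y = np.exp(M.T @ lam - 1.0)
    return y[:n]                             # x-part; strictly interior if consistent

# Toy consistent system with an interior point, e.g. x = (1, 1):
A = np.array([[1.0, 1.0], [1.0, -1.0]])
b = np.array([4.0, 2.0])
x = entropy_feasibility(A, b)
```

Because the recovered y is an exponential, it is strictly positive, so a converged run yields a strictly interior point (Ax < b, x > 0) rather than a boundary solution.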