On the expected number of linear complementarity cones intersected by random and semi-random rays
Mathematical Programming: Series A and B
Smoothed analysis of algorithms: why the simplex algorithm usually takes polynomial time
STOC '01 Proceedings of the thirty-third annual ACM symposium on Theory of computing
A Method Enabling Feasible Conformance Test Sequence Generation for EFSM Models
IEEE Transactions on Computers
Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time
Journal of the ACM (JACM)
Improved Smoothed Analysis of the Shadow Vertex Simplex Method
FOCS '05 Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science
Linear Complementarity Algorithms for Infinite Games
SOFSEM '10 Proceedings of the 36th Conference on Current Trends in Theory and Practice of Computer Science
On broadcast scheduling with limited energy
CIAC'06 Proceedings of the 6th Italian conference on Algorithms and Complexity
George Dantzig's impact on the theory of computation
Discrete Optimization
Globally determining a minimum-area rectangle enclosing the projection of a higher-dimensional set
Operations Research Letters
VLSI Design - Special issue on New Algorithmic Techniques for Complex EDA Problems
It has been a challenge for mathematicians to confirm theoretically the extremely good performance of simplex-type algorithms for linear programming. In this paper the average number of steps performed by a simplex algorithm, the so-called self-dual method, is analyzed. The algorithm is not started at the traditional point (1, ..., 1)^T; instead, points of the form (1, ε, ε^2, ...)^T, with ε sufficiently small, are used. The result is better, in two respects, than those of the previous analyses. First, it is shown that the expected number of steps is bounded between two quadratic functions c1(min(m, n))^2 and c2(min(m, n))^2 of the smaller dimension of the problem. This should be compared with the two previous major results in the field. Borgwardt proves an upper bound of O(n^4 m^(1/(n-1))) under a model that implies that the zero vector satisfies all the constraints; moreover, the algorithm under his consideration solves only problems from that particular subclass. Smale analyzes the self-dual algorithm starting at (1, ..., 1)^T. He shows that for any fixed m there is a constant c(m) such that the expected number of steps is less than c(m)(ln n)^(m(m+1)); Megiddo has shown that, under Smale's model, an upper bound C(m) exists. Thus, for the first time, a polynomial upper bound with no restrictions (except for nondegeneracy) on the problem is proved, and, for the first time, a nontrivial lower bound of precisely the same order of magnitude is established. Both Borgwardt and Smale require the input vectors to be drawn from spherically symmetric distributions. In the model in this paper, invariance is required only under certain
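A minimal sketch of the two quantities the abstract names: the perturbed starting point (1, ε, ε^2, ...)^T that replaces the all-ones vector, and the quadratic bounds c1(min(m, n))^2 and c2(min(m, n))^2 on the expected step count. The function names, the choice of ε, and the constants c1, c2 are illustrative assumptions; this is not the paper's implementation of the self-dual method itself.

```python
def perturbed_start(n: int, eps: float = 1e-6) -> list[float]:
    """Starting vector (1, eps, eps^2, ..., eps^(n-1)) used in place of
    the traditional all-ones point (1, ..., 1)^T.  The value of eps
    here is an arbitrary 'sufficiently small' placeholder."""
    return [eps ** k for k in range(n)]

def expected_step_bounds(m: int, n: int, c1: float, c2: float) -> tuple[float, float]:
    """Lower and upper quadratic bounds on the expected number of steps:
    c1 * (min(m, n))^2  <=  E[steps]  <=  c2 * (min(m, n))^2.
    The constants c1 and c2 are model-dependent and not specified here."""
    k = min(m, n)
    return c1 * k * k, c2 * k * k

# Example: a 4-dimensional perturbed start and bounds for an LP with
# m = 100 constraints and n = 3 variables (hypothetical numbers).
start = perturbed_start(4, eps=0.1)
lo, hi = expected_step_bounds(100, 3, c1=1.0, c2=2.0)
print(start)
print(lo, hi)
```

Both bounds depend only on the smaller dimension min(m, n), which is the point of the result: the expected step count is quadratic in that dimension, regardless of how large the other dimension grows.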