Linear programming in O(n × 3^(d^2)) time
Information Processing Letters
On a multidimensional search technique and its application to the Euclidean one-centre problem
SIAM Journal on Computing
Lower bounds for orthogonal range searching: part II. The arithmetic model
Journal of the ACM
A randomized algorithm for fixed-dimensional linear programming
Mathematical Programming: Series A and B
Partitioning arrangements of lines, part II: applications
Discrete & Computational Geometry
Small-dimensional linear programming and convex hulls made easy
Discrete & Computational Geometry
Random number generation and quasi-Monte Carlo methods
Cutting hyperplanes for divide-and-conquer
Discrete & Computational Geometry
ACM Computing Surveys
Las Vegas algorithms for linear and integer programming when the dimension is small
Journal of the ACM
Approximations and optimal geometric divide-and-conquer
Selected papers of the 23rd annual ACM symposium on Theory of computing
Randomized algorithms
Derandomization in computational geometry
Journal of Algorithms
Handbook of combinatorics (vol. 2)
On linear-time deterministic algorithms for optimization problems in fixed dimension
Journal of Algorithms
A spectral approach to lower bounds with applications to geometric searching
SIAM Journal on Computing
Linear programming in linear time when the dimension is fixed
Journal of the ACM
A trace bound for the hereditary discrepancy
Proceedings of the sixteenth annual symposium on Computational geometry
The discrepancy method: randomness and complexity
A combinatorial bound for linear programming and related problems
STACS '92 Proceedings of the 9th Annual Symposium on Theoretical Aspects of Computer Science
In 1935, van der Corput asked the following question: given an infinite sequence of reals in [0, 1], define D(n) = sup_{0 ≤ x ≤ 1} | |S_n ∩ [0, x]| − nx |, where S_n consists of the first n elements of the sequence. Is it possible for D(n) to stay in O(1)? Many years later, Schmidt proved that D(n) can never be in o(log n). In other words, there are limits on how well the discrete distribution, x ↦ |S_n ∩ [0, x]|, can simulate the continuous one, x ↦ nx. The study of this intriguing phenomenon and its numerous variants related to irregularities of distribution has given rise to discrepancy theory.

The relevance of the subject to complexity theory is most evident in the study of probabilistic algorithms. Suppose that we feed a probabilistic algorithm not a perfectly random sequence of bits (as is usually required) but one that is only pseudorandom, or even deterministic. Must performance necessarily suffer? In particular, suppose that one could trade an exponential-size probability space for one of polynomial size without the algorithm noticing the change. This form of derandomization can be expressed by saying that a very large distribution can be simulated by a small one for the purposes of the algorithm. Put differently, there exists a measure with respect to which the two distributions have low discrepancy.

The study of discrepancy theory predates complexity theory, and a wealth of mathematical techniques can be brought to bear to prove nontrivial derandomization results. The pipeline of ideas that flows from discrepancy theory to complexity theory constitutes the discrepancy method. We give a few examples in this survey; a more thorough treatment is given in our book [15]. We also briefly discuss the relevance of the discrepancy method to complexity lower bounds.
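To make the quantity D(n) concrete, here is a small sketch (in Python, not part of the original survey) that computes D(n) exactly for prefixes of the base-2 van der Corput sequence. It uses the standard observation that the step function x ↦ |S_n ∩ [0, x]| − nx attains its extreme values just before and exactly at each point of S_n, so the supremum can be evaluated by scanning the sorted points.

```python
def radical_inverse(k, base=2):
    """Base-b radical inverse of k: mirror k's digits around the radix point.
    The sequence radical_inverse(1), radical_inverse(2), ... is the
    van der Corput sequence in that base."""
    inv, denom = 0.0, 1.0
    while k:
        k, digit = divmod(k, base)
        denom *= base
        inv += digit / denom
    return inv

def discrepancy(points):
    """D(n) = sup_{0 <= x <= 1} | |S_n ∩ [0, x]| - n·x | for a finite S_n.
    The sup is attained at (or just below) one of the points, so it suffices
    to check f(x) = count([0, x]) - n·x at those breakpoints."""
    n = len(points)
    best = 0.0
    for i, p in enumerate(sorted(points), start=1):
        best = max(best, abs(i - n * p),        # x = p_i (p_i counted)
                         abs((i - 1) - n * p))  # x just below p_i
    return best

# D(n) for prefixes of the van der Corput sequence grows only logarithmically,
# consistent with Schmidt's Ω(log n) lower bound being essentially tight.
seq = [radical_inverse(k) for k in range(1, 1025)]
for n in (16, 64, 256, 1024):
    print(n, round(discrepancy(seq[:n]), 3))
```

Running this shows D(n) staying tiny compared to n, which is exactly why low-discrepancy sequences like van der Corput's underpin quasi-Monte Carlo methods.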