We consider packing LPs with m rows where all constraint coefficients are normalized to lie in the unit interval. The n columns arrive in random order, and the goal is to set the corresponding decision variables irrevocably as they arrive so as to obtain a feasible solution maximizing the expected reward. Previous (1−ε)-competitive algorithms require the right-hand sides of the LP to be $\Omega (\frac{m}{\epsilon^2} \log \frac{n}{\epsilon})$, a bound that worsens with the number of columns and rows. However, the dependence on the number of columns is not required in the single-row case, and the known lower bounds for the general case are also independent of n. Our goal is to understand whether the dependence on n is required in the multi-row case, which would make it fundamentally harder than the single-row version. We refute this by exhibiting an algorithm that is (1−ε)-competitive as long as the right-hand sides are $\Omega (\frac{m^2}{\epsilon^2} \log \frac{m}{\epsilon})$. Our techniques refine previous PAC-learning-based approaches, which interpret the online decisions as linear classifications of the columns based on sampled dual prices. The key ingredient of our improvement is a non-standard covering argument, together with the observation that small such covers can be obtained only when the columns of the LP belong to few one-dimensional subspaces; bounding the size of the constructed cover also relies on the geometry of linear classifiers. General packing LPs are handled by perturbing the input columns, which can be seen as making the learning problem more robust.
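To make the dual-price idea concrete, here is a minimal pure-Python sketch of the single-row (m = 1) special case: learn a price from an ε-fraction sample of the arrivals, then accept exactly the columns that the resulting linear classifier (reduced cost at the sampled price) labels positive. The function name, the greedy rule for extracting a price from the sample, and the single-row simplification are illustrative assumptions, not the paper's algorithm.

```python
def online_packing_single_row(items, B, eps=0.1):
    """Single-row special case of the random-order packing LP.

    `items` is the arrival sequence of (reward c_t, size a_t) pairs, assumed
    to already be in uniformly random order, as the model stipulates. The
    first eps*n arrivals are observed without being accepted and are used to
    learn a dual price; afterwards an item is accepted iff its reduced cost
    c_t - price * a_t is nonnegative and it fits the remaining capacity.
    """
    n = len(items)
    s = max(1, int(eps * n))
    sample, rest = items[:s], items[s:]

    # Learn a dual price: the reward density at which greedily packing the
    # sample (densest items first) would exceed the scaled capacity eps*B.
    price, used = 0.0, 0.0
    for c, a in sorted(sample, key=lambda x: x[0] / x[1], reverse=True):
        if used + a > eps * B:
            price = c / a
            break
        used += a
    # If the whole sample fits, price stays 0 and later items all look cheap.

    reward, cap = 0.0, B
    for c, a in rest:
        if c - price * a >= 0 and a <= cap:  # nonnegative reduced cost & feasible
            reward += c
            cap -= a
    return reward
```

The multi-row algorithm replaces the single density threshold by a vector of m dual prices (one per constraint), which is exactly where the linear-classifier view and the covering argument described above enter.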