Foundations and Trends® in Theoretical Computer Science
If NP Languages are Hard on the Worst-Case, Then it is Easy to Find Their Hard Instances. Computational Complexity.
Worst-Case to Average-Case Reductions Revisited. APPROX '07/RANDOM '07: Proceedings of the 10th International Workshop on Approximation and the 11th International Workshop on Randomization, and Combinatorial Optimization: Algorithms and Techniques.
Some Results on Average-Case Hardness Within the Polynomial Hierarchy. FSTTCS '06: Proceedings of the 26th International Conference on Foundations of Software Technology and Theoretical Computer Science.
Worst-Case vs. Algorithmic Average-Case Complexity in the Polynomial-Time Hierarchy. APPROX '06/RANDOM '06: Proceedings of the 9th International Conference on Approximation Algorithms for Combinatorial Optimization Problems and the 10th International Conference on Randomization and Computation.
We prove that if NP ⊄ BPP, i.e., if some NP-complete language is worst-case hard, then for every probabilistic polynomial-time algorithm attempting to decide the language there exists a polynomially samplable distribution that is hard for it: the algorithm often errs on inputs drawn from this distribution. This is the first worst-case to average-case reduction for NP of any kind. We stress, however, that this does not mean there is one fixed samplable distribution that is hard for all probabilistic polynomial-time algorithms, which is a prerequisite (though not sufficient) assumption for one-way functions (OWF) and cryptography. Nevertheless, we do show that there is a fixed distribution on instances of NP-complete languages that is samplable in quasi-polynomial time and is hard for all probabilistic polynomial-time algorithms (unless NP is easy in the worst case). Our results are based on the following lemma, which may be of independent interest: given the description of an efficient (probabilistic) algorithm that fails to solve SAT in the worst case, we can efficiently generate at most three Boolean formulas (of increasing lengths) such that the algorithm errs on at least one of them.
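The lemma echoes the classical search-to-decision self-reduction for SAT. The Python sketch below is a simplification under stated assumptions, not the paper's actual construction (which handles probabilistic algorithms and compresses the evidence into at most three formulas): it runs a claimed SAT decider as the oracle of the self-reduction, recording every query, so that if the resulting assignment fails to satisfy a formula that is in fact satisfiable, the decider must have erred on at least one recorded query. All names are illustrative; formulas are CNFs given as lists of integer-literal clauses.

```python
from itertools import product

def brute_force_sat(cnf, n):
    """Ground-truth satisfiability by enumeration (exponential; tiny n only)."""
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in cl) for cl in cnf):
            return True
    return False

def satisfies(cnf, assignment):
    """Check whether a full assignment (list of bools) satisfies the CNF."""
    return all(any(assignment[abs(l) - 1] == (l > 0) for l in cl) for cl in cnf)

def substitute(cnf, var, value):
    """Fix variable `var` (1-indexed) to `value`, simplifying the CNF."""
    out = []
    for cl in cnf:
        if (var in cl and value) or (-var in cl and not value):
            continue  # clause satisfied; drop it
        out.append([l for l in cl if abs(l) != var])  # may leave an empty clause
    return out

def search_with_oracle(cnf, n, decide):
    """Search-to-decision self-reduction driven by the claimed decider `decide`.

    Returns (assignment, queries): `assignment` is None if the decider rejects
    the input; `queries` records every formula the decider was asked about.
    If `assignment` fails to satisfy a satisfiable `cnf`, the decider erred
    on at least one formula in `queries`.
    """
    queries = [cnf]
    if not decide(cnf):
        return None, queries
    assignment, current = [], cnf
    for v in range(1, n + 1):
        trial = substitute(current, v, True)
        queries.append(trial)
        if decide(trial):
            assignment.append(True)
            current = trial
        else:
            assignment.append(False)
            current = substitute(current, v, False)
    return assignment, queries
```

With an always-accepting (hence faulty) decider, the self-reduction on a satisfiable formula produces a non-satisfying assignment, and the recorded queries pinpoint where the decider must have been wrong; with a correct decider, it recovers a satisfying assignment.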