We study compression that preserves the solution to an instance of a problem rather than preserving the instance itself. Our focus is on the compressibility of $\mathcal{NP}$ decision problems. We consider $\mathcal{NP}$ problems that have long instances but relatively short witnesses. The question is whether one can efficiently compress an instance and store a shorter representation that maintains the information of whether the original input is in the language. We want the length of the compressed instance to be polynomial in the length of the witness and polylogarithmic in the length of the original input. Such compression enables succinctly storing instances until a future setting allows solving them, whether through a technological or algorithmic breakthrough or simply because enough time has elapsed. In this paper, we first develop the basic complexity theory of compression, including reducibility, completeness, and a stratification of $\mathcal{NP}$ with respect to compression. We then show that compressibility (say, of SAT) would have vast implications for cryptography, including constructions of one-way functions and collision-resistant hash functions from any hard-on-average problem in $\mathcal{NP}$, as well as cryptanalysis of key agreement protocols in the “bounded storage model” when combined with (time) complexity-based cryptography.
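As a toy illustration of instance compression (not taken from the paper), the classic Buss kernelization for Vertex Cover shrinks an instance with parameter $k$ to an equivalent instance with at most $k^2$ edges, regardless of how large the original graph was. This is a sketch of that well-known rule; the function name and edge-list representation are my own choices:

```python
from collections import defaultdict

def buss_kernel(edges, k):
    """Buss kernelization for Vertex Cover.

    Given an edge list and a budget k, returns an equivalent reduced
    instance (edges', k') with at most k'^2 edges, or None if no vertex
    cover of size <= k can exist. Membership in the language is
    preserved, even though the instance itself is rewritten.
    """
    edges = {frozenset(e) for e in edges}
    changed = True
    while changed and k >= 0:
        changed = False
        # Recompute degrees from the current edge set.
        deg = defaultdict(int)
        for e in edges:
            for v in e:
                deg[v] += 1
        for v, d in deg.items():
            if d > k:
                # A vertex of degree > k must belong to every cover of
                # size <= k: take it and delete its incident edges.
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
    if k < 0 or len(edges) > k * k:
        return None  # each remaining vertex covers <= k edges, so > k^2 edges is a "no"
    return edges, k
```

For example, a triangle has no vertex cover of size 1, so `buss_kernel([(0, 1), (1, 2), (0, 2)], 1)` returns `None`, while a star with one high-degree center reduces to the empty instance. Note that a kernel is equivalence-preserving instance compression; the paper's notion is more general, since the compressed string need not itself be an instance of the same problem.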