What happens when you use a partially defective bit-commitment protocol to commit to the same bit many times? For example, suppose that the protocol allows the receiver to guess the committed bit with advantage ε, and that you used that protocol to commit to the same bit more than 1/ε times. Or suppose that you encrypted some message many times (to many people), only to discover later that the encryption scheme you were using is partially defective, and an eavesdropper has some noticeable advantage in guessing the message from the ciphertext. Can we at least show that even after many such encryptions, the eavesdropper could not have learned the message with certainty?

In this work we take another look at amplification and degradation of computational hardness. We describe a rather generic setting in which one can argue about amplification or degradation of computational hardness via sequential repetition of interactive protocols, and we prove that in all the cases we consider, hardness behaves as one would expect from the corresponding information-theoretic bounds. In particular, for the example above we can prove that after committing to the same bit n times, the receiver's advantage in guessing the committed bit is negligibly close to 1 - (1 - ε)^n.

Our results for hardness amplification follow simply by observing that some of the known proofs of Yao's lemmas extend easily to interactive protocols. The question of hardness degradation, on the other hand, was never considered before as far as we know, and we prove those results from scratch.
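To build intuition for the 1 - (1 - ε)^n bound, here is a minimal Monte Carlo sketch of the information-theoretic analogue. It models each commitment run as independently leaking the committed bit to the receiver with probability ε (a toy leakage model assumed for illustration, not the actual computational protocol analyzed in the paper), and checks that the receiver's advantage after n runs tracks the formula:

```python
import random

def receiver_advantage(eps, n, trials=200_000):
    """Estimate the receiver's advantage in guessing a bit that was
    committed n times, in a toy model where each run independently
    leaks the bit with probability eps; otherwise the receiver
    guesses uniformly at random."""
    correct = 0
    for _ in range(trials):
        bit = random.randrange(2)
        # The bit is learned if any of the n runs leaks it.
        leaked = any(random.random() < eps for _ in range(n))
        guess = bit if leaked else random.randrange(2)
        correct += (guess == bit)
    # Advantage defined as 2*Pr[correct guess] - 1.
    return 2 * correct / trials - 1

eps, n = 0.1, 10
predicted = 1 - (1 - eps) ** n  # the bound from the abstract
print(predicted, receiver_advantage(eps, n))
```

In this model the two numbers agree up to sampling error; the paper's contribution is showing that the computational setting, where the receiver's advantage comes from an efficient adversary rather than explicit leakage, obeys the same bound up to negligible slack.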