"Hash then encrypt" is an approach to message authentication in which the message is first hashed down to a k-bit value using an ε-universal hash function, and the result is then encrypted, say with a block cipher. The security of this scheme is proportional to ε·q^2, where q is the number of MACs the adversary can request. Since ε is at least 2^-k, the best one can hope for is O(q^2/2^k) security. Unfortunately, such a small ε is not achieved by the simple hash functions used in practice, such as polynomial evaluation or the Merkle-Damgård construction, where ε grows with the message length L. The main insight of this work is that, by applying randomized message preprocessing via a short random salt p (which must then be sent as part of the authentication tag), one can use the "hash then encrypt" paradigm with suboptimal "practical" ε-universal hash functions and still improve its exact security to the optimal O(q^2/2^k). Specifically, with a salt p of at most O(log L) bits, one can always regain the optimal exact security O(q^2/2^k), even when ε grows polynomially with L. We also give very simple preprocessing maps for two popular "suboptimal" hash functions, namely polynomial evaluation and the Merkle-Damgård construction. Our results follow from a general extension of the classical Carter-Wegman paradigm, which we believe is of independent interest: at a high level, it shows that public randomization allows one to replace the "worst-case" collision probability ε with the potentially much smaller "average-case" collision probability.
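To make the paradigm concrete, here is a minimal Python sketch of "hash then encrypt" with randomized message preprocessing. It is illustrative only: the field size, the 4-byte salt, the way the salt is mixed into the message, and the use of HMAC as a stand-in "encryption" PRF are all simplifying assumptions, not the paper's exact construction.

```python
# Toy sketch of the "hash then encrypt" MAC with a short public salt.
# All parameter choices below are illustrative assumptions.
import hashlib
import hmac
import secrets

P = 2**61 - 1  # prime field for polynomial-evaluation hashing


def poly_hash(key: int, blocks: list) -> int:
    """Epsilon-universal polynomial-evaluation hash:
    H_key(m) = m_1*key + m_2*key^2 + ... + m_L*key^L mod P.
    Its collision probability epsilon grows with the block count L
    (roughly L/P), which is exactly the "suboptimal" behavior the
    salted preprocessing is meant to compensate for."""
    acc = 0
    for b in reversed(blocks):
        acc = (acc + b) * key % P
    return acc


def mac(hash_key: int, prf_key: bytes, blocks: list):
    # Randomized preprocessing: mix a short public salt into the
    # message before hashing (here, naively prepended as a block).
    # The salt travels with the tag, as in the abstract.
    salt = secrets.token_bytes(4)
    salted = [int.from_bytes(salt, "big")] + blocks
    h = poly_hash(hash_key, salted)
    # "Encrypt" the hash value with a PRF; HMAC stands in for a
    # block cipher to keep the sketch self-contained.
    tag = hmac.new(prf_key, h.to_bytes(8, "big"), hashlib.sha256).digest()[:8]
    return salt, tag


def verify(hash_key: int, prf_key: bytes, blocks: list,
           salt: bytes, tag: bytes) -> bool:
    salted = [int.from_bytes(salt, "big")] + blocks
    h = poly_hash(hash_key, salted)
    expected = hmac.new(prf_key, h.to_bytes(8, "big"), hashlib.sha256).digest()[:8]
    return hmac.compare_digest(expected, tag)
```

A forger who cannot predict the salt can no longer target the worst-case message pair for the polynomial hash, which is the intuition behind trading the worst-case ε for an average-case collision probability.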