A note on (im)possibilities of obfuscating programs of zero-knowledge proofs of knowledge

  • Authors:
  • Ning Ding; Dawu Gu

  • Affiliations:
  • Department of Computer Science and Engineering, Shanghai Jiao Tong University, China (both authors)

  • Venue:
  • CANS '11: Proceedings of the 10th International Conference on Cryptology and Network Security
  • Year:
  • 2011


Abstract

Program obfuscation seeks efficient methods for rewriting programs in an incomprehensible way while preserving their functionality. In this paper we continue this line of research with respect to zero-knowledge proofs of knowledge. Motivated by both theoretical and practical interests, we ask whether the prover and verifier of a zero-knowledge proof of knowledge are obfuscatable. Our answer is as follows. We first present two definitions of obfuscation for interactive probabilistic programs and then establish the following results. (1) With respect to an average-case virtual black-box definition, we obtain impossibility results for obfuscating provers of zero-knowledge and witness-indistinguishable proofs of knowledge. These results state that for any zero-knowledge (or witness-indistinguishable) proof of knowledge with an efficient prover strategy, the honest prover with an instance and its witness hardwired is unobfuscatable if computing a witness (or a second witness) for this instance is hard. Moreover, we extend these results to the t-composition setting and obtain similar results. They imply that if an adversary obtains the prover's code (e.g., by stealing a smartcard), he can indeed learn some knowledge from it beyond its functionality, no matter what measures the card designer takes to resist reverse engineering. (2) With respect to a worst-case virtual black-box definition, we show that the honest verifier (with the public input hardwired) of Blum's 3-round zero-knowledge proof for Hamiltonian Cycle is obfuscatable. This investigation is motivated by an issue of privacy protection: if an adversary controls the verifier, he can obtain all provers' names and public inputs, so the provers' privacy may leak. We construct an obfuscator for the verifier, which implies that even if an adversary obtains the verifier's code, he cannot learn any knowledge, e.g. provers' names, from it. We thus realize anonymity of provers' accesses to the verifier and resolve this privacy issue.
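
To make the two results concrete, the following is a minimal, illustrative Python sketch of Blum's 3-round protocol for Hamiltonian Cycle. None of this code is from the paper: the hash-based `commit` is a toy stand-in for a real commitment scheme, `verify` is the honest-verifier program that result (2) shows can be obfuscated, and `extract_cycle` only demonstrates the rewinding intuition behind the prover-side impossibility (running the prover's code on both challenges for the same first message reveals the witness).

```python
import hashlib
import random
import secrets


def commit(bit):
    """Hash-based bit commitment (illustrative stand-in, not a real scheme)."""
    r = secrets.token_bytes(16)
    return hashlib.sha256(r + bytes([bit])).hexdigest(), r


def opens_to(c, r, bit):
    return hashlib.sha256(r + bytes([bit])).hexdigest() == c


def prover_commit(G, n):
    """Round 1: commit entry-wise to the adjacency matrix of a randomly relabeled copy of G."""
    perm = list(range(n))
    random.shuffle(perm)
    coms = [[None] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            coms[perm[u]][perm[v]] = commit(G[u][v])
    msg = [[c for c, _ in row] for row in coms]  # only the commitments are sent
    return msg, (perm, coms)


def prover_respond(cycle, n, state, b):
    """Round 3: answer the verifier's challenge bit b."""
    perm, coms = state
    if b == 0:
        # Reveal the relabeling and open every commitment.
        return perm, [[r for _, r in row] for row in coms]
    # Open only the n entries lying on the relabeled cycle.
    edges = [(perm[cycle[k]], perm[cycle[(k + 1) % n]]) for k in range(n)]
    return None, [(i, j, coms[i][j][1]) for i, j in edges]


def verify(G, n, msg, b, resp):
    """The honest verifier's check -- the program result (2) obfuscates."""
    if b == 0:
        perm, rands = resp
        if sorted(perm) != list(range(n)):  # pi must be a genuine relabeling
            return False
        return all(opens_to(msg[perm[u]][perm[v]], rands[perm[u]][perm[v]], G[u][v])
                   for u in range(n) for v in range(n))
    _, openings = resp
    if len(openings) != n:
        return False
    succ = {}
    for i, j, r in openings:
        if not opens_to(msg[i][j], r, 1):
            return False
        succ[i] = j
    # The opened 1-entries must trace a single cycle through all n vertices.
    seen, cur = set(), 0
    while cur in succ and cur not in seen:
        seen.add(cur)
        cur = succ[cur]
    return len(seen) == n and cur == 0


def extract_cycle(G, cycle, n):
    """Rewinding 'adversary': given the prover's code (witness hardwired),
    answering both challenges for one first message reveals the cycle."""
    msg, state = prover_commit(G, n)
    perm, _ = prover_respond(cycle, n, state, 0)      # challenge b = 0
    _, openings = prover_respond(cycle, n, state, 1)  # rewind with b = 1
    inv = {perm[v]: v for v in range(n)}
    return [(inv[i], inv[j]) for i, j, _ in openings]


if __name__ == "__main__":
    # Toy instance (our own example): a 4-cycle 0-1-2-3-0 plus one chord.
    n, cycle = 4, [0, 1, 2, 3]
    G = [[0] * n for _ in range(n)]
    for k in range(n):
        u, v = cycle[k], cycle[(k + 1) % n]
        G[u][v] = G[v][u] = 1
    G[0][2] = G[2][0] = 1
    # One iteration has soundness error 1/2; real use repeats it many times.
    for _ in range(20):
        msg, state = prover_commit(G, n)
        b = secrets.randbelow(2)
        assert verify(G, n, msg, b, prover_respond(cycle, n, state, b))
    print("extractor recovers:", extract_cycle(G, cycle, n))
```

The sketch also shows why obfuscating the verifier is meaningful here: the verifier's code contains only the hardwired public input (the graph G), and hiding that input from whoever holds the code is exactly what protects the provers' identities in the paper's privacy scenario.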