Probabilistically checkable proofs with zero knowledge
STOC '97 Proceedings of the twenty-ninth annual ACM symposium on Theory of computing
Learning with maximum-entropy distributions
COLT '97 Proceedings of the tenth annual conference on Computational learning theory
Learning with Maximum-Entropy Distributions
Machine Learning
Abstract: We show that basic problems in reasoning about statistics are NP-hard even to solve approximately. We consider the problem of detecting internal inconsistencies in a set of statistics. We say that a set of statistics is ε-inconsistent if one of the stated probabilities must be off by at least ε. For a positive constant ε, we show that it is NP-hard to distinguish ε-inconsistent statistics from self-consistent statistics. This result holds even when restricted to complete sets of pairwise statistics over Boolean domains. We next consider what may be determined about distributions from a given (consistent) set of pairwise statistics over Boolean domains. We show that it is NP-hard to distinguish the case that Pr(X_i ∧ X_j) is necessarily 0 from the case that Pr(X_i ∧ X_j) can take any value in [0, 1/2]. Similarly, we show that it is NP-hard to distinguish the case that |Corr(X_i, X_j)| = 1 from the case that |Corr(X_i, X_j)| is unconstrained. Whereas the connection between PCPs and hardness of approximation has been known since Feige et al. (1991), we introduce the application of "zero-knowledge" PCPs as a tool for proving NP-hardness results for approximation problems.
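To make the notion of inconsistency concrete: for just two Boolean variables, a stated joint probability p12 = Pr(X1 ∧ X2) is consistent with marginals p1, p2 exactly when it lies within the Fréchet bounds max(0, p1 + p2 − 1) ≤ p12 ≤ min(p1, p2). The sketch below (an illustrative helper, not from the paper; the function name and interface are assumptions) returns 0 when the three statistics are self-consistent and otherwise the amount by which p12 violates the bounds, which lower-bounds the ε of ε-inconsistency. The hardness results in the abstract say that no such efficient check scales to complete sets of pairwise statistics over many variables, even approximately.

```python
def frechet_gap(p1: float, p2: float, p12: float) -> float:
    """Illustrative consistency check for TWO Boolean variables.

    p1, p2 : stated marginals Pr(X1), Pr(X2)
    p12    : stated joint Pr(X1 and X2)

    Returns 0.0 iff some distribution realizes all three statistics;
    otherwise the distance from p12 to the feasible interval, a lower
    bound on how far at least one stated probability must be off.
    """
    lo = max(0.0, p1 + p2 - 1.0)  # inclusion-exclusion: Pr(X1 or X2) <= 1
    hi = min(p1, p2)              # the joint cannot exceed either marginal
    if p12 < lo:
        return lo - p12
    if p12 > hi:
        return p12 - hi
    return 0.0

# Self-consistent: independent fair coins give Pr(X1 and X2) = 0.25.
print(frechet_gap(0.5, 0.5, 0.25))  # 0.0

# Inconsistent: the joint 0.5 exceeds both marginals 0.1, so some
# stated probability is off by at least 0.4.
print(frechet_gap(0.1, 0.1, 0.5))   # 0.4
```

The two-variable case is easy precisely because the feasible region is a single interval; with n variables and all pairwise statistics, deciding feasibility requires reasoning over distributions on 2^n atoms, and the paper shows even approximate versions of that question are NP-hard.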