Limits of computational differential privacy in the client/server setting

  • Authors: Adam Groce, Jonathan Katz, Arkady Yerukhimovich
  • Affiliation: Dept. of Computer Science, University of Maryland (all authors)
  • Venue: TCC'11: Proceedings of the 8th Conference on Theory of Cryptography
  • Year: 2011

Abstract

Differential privacy is a well-established definition guaranteeing that queries to a database do not reveal "too much" information about specific individuals who have contributed to the database. The standard definition of differential privacy is information-theoretic in nature, but it is natural to consider computational relaxations and to explore what can be achieved with respect to such notions. Mironov et al. (Crypto 2009) and McGregor et al. (FOCS 2010) recently introduced and studied several variants of computational differential privacy, and showed that in the two-party setting (where the data is split between two parties) these relaxations can offer significant advantages. Left open by prior work was the extent, if any, to which computational differential privacy can help in the usual client/server setting, where the entire database resides at the server and the client poses queries on this data. We show, for queries with output in R^n (for constant n) and with respect to a large class of utilities, that any computationally private mechanism can be converted to a statistically private mechanism that is equally efficient and achieves roughly the same utility.
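For context, the sketch below illustrates the kind of statistically (information-theoretically) private client/server mechanism the abstract refers to: the server answers a real-valued query by adding Laplace noise scaled to the query's sensitivity. This is the standard textbook Laplace mechanism, not the conversion constructed in the paper; the function and parameter names are illustrative.

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=None):
    """Answer a real-valued query with epsilon-differential privacy.

    Adding Laplace noise with scale sensitivity/epsilon to the exact answer
    gives information-theoretic epsilon-differential privacy. This is the
    classic client/server mechanism, shown here only as a reference point.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return true_answer + rng.laplace(loc=0.0, scale=scale)

# Example: a counting query over a toy database held entirely by the server.
database = [0, 1, 1, 0, 1, 1, 1]      # one bit per individual
true_count = sum(database)            # exact query answer
# A counting query changes by at most 1 when one individual is added/removed,
# so its sensitivity is 1.
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"true count = {true_count}, private answer = {noisy_count:.2f}")
```

The paper's result says that, for queries with output in R^n and a large class of utilities, mechanisms of this statistical kind are essentially all one can hope for in the client/server setting: relaxing to computational privacy yields no significant gain in utility or efficiency.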