On sampling, anonymization, and differential privacy or, k-anonymization meets differential privacy

  • Authors:
  • Ninghui Li; Wahbeh Qardaji; Dong Su

  • Affiliations:
  • Purdue University, West Lafayette, IN (all authors)

  • Venue:
  • Proceedings of the 7th ACM Symposium on Information, Computer and Communications Security
  • Year:
  • 2012

Abstract

This paper aims to answer the following two questions in privacy-preserving data analysis and publishing. The first is: what formal privacy guarantee (if any) do k-anonymization methods provide? k-Anonymization methods have been studied extensively in the database community, but are known to lack strong privacy guarantees. The second question is: how can we benefit from the adversary's uncertainty about the data? More specifically, can we come up with a meaningful relaxation of differential privacy [2, 3] by exploiting the adversary's uncertainty about the dataset? We now discuss these two motivations in more detail.
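
For context, the formal guarantee referred to above is the standard notion of ε-differential privacy from [2, 3]; the following statement is included here only as background and is not part of this paper's contribution. A randomized mechanism $\mathcal{A}$ satisfies $\epsilon$-differential privacy if, for any two datasets $D$ and $D'$ that differ in a single record and for any set $S$ of possible outputs,

$$\Pr[\mathcal{A}(D) \in S] \;\le\; e^{\epsilon} \cdot \Pr[\mathcal{A}(D') \in S].$$

A relaxation of the kind the second question asks about would weaken this worst-case requirement by taking into account the adversary's uncertainty about which dataset was actually collected.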