Protecting Respondents' Identities in Microdata Release. IEEE Transactions on Knowledge and Data Engineering.
k-anonymity: a model for protecting privacy. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems.
Achieving k-anonymity privacy protection using generalization and suppression. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems.
Protecting Location Privacy with Personalized k-Anonymity: Architecture and Algorithms. IEEE Transactions on Mobile Computing.
Differential privacy: a survey of results. In Proceedings of the 5th International Conference on Theory and Applications of Models of Computation (TAMC '08).
Differential privacy. In Proceedings of the 33rd International Colloquium on Automata, Languages and Programming (ICALP '06), Part II.
When random sampling preserves privacy. In Advances in Cryptology (CRYPTO '06).
Calibrating noise to sensitivity in private data analysis. In Proceedings of the Third Theory of Cryptography Conference (TCC '06).
Membership privacy: a unifying framework for privacy definitions. In Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security (CCS '13).
This paper aims to answer the following two questions in privacy-preserving data analysis and publishing. The first is: what formal privacy guarantee (if any) do k-anonymization methods provide? k-Anonymization methods have been studied extensively in the database community, but they are known to lack strong privacy guarantees. The second is: how can we benefit from the adversary's uncertainty about the data? More specifically, can we come up with a meaningful relaxation of differential privacy [2, 3] by exploiting the adversary's uncertainty about the dataset? We now discuss these two motivations in more detail.
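For concreteness, differential privacy is typically achieved by perturbing query answers with noise calibrated to the query's sensitivity. The following is a minimal sketch (not from the paper; the function name and parameters are illustrative) of the standard Laplace mechanism applied to a counting query, whose sensitivity is 1 because adding or removing one record changes the count by at most one:

```python
import numpy as np

def laplace_count(data, predicate, epsilon, rng=None):
    """Answer a counting query under epsilon-differential privacy.

    A counting query has sensitivity 1, so Laplace noise with
    scale 1/epsilon suffices for epsilon-differential privacy.
    """
    rng = rng or np.random.default_rng()
    true_count = sum(1 for x in data if predicate(x))
    # Add zero-mean Laplace noise with scale = sensitivity / epsilon.
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Example: privately count records with age >= 40 (toy data).
ages = [23, 45, 67, 34, 52, 41, 29, 60]
noisy = laplace_count(ages, lambda a: a >= 40, epsilon=0.5)
```

Smaller values of `epsilon` give stronger privacy but larger expected noise; the relaxations discussed in this paper aim to reduce that noise by additionally modeling the adversary's uncertainty about the dataset.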