Protecting Respondents' Identities in Microdata Release
IEEE Transactions on Knowledge and Data Engineering
Top-Down Specialization for Information and Privacy Preservation
ICDE '05 Proceedings of the 21st International Conference on Data Engineering
Incognito: efficient full-domain K-anonymity
Proceedings of the 2005 ACM SIGMOD international conference on Management of data
Mondrian Multidimensional K-Anonymity
ICDE '06 Proceedings of the 22nd International Conference on Data Engineering
ℓ-Diversity: Privacy Beyond k-Anonymity
ICDE '06 Proceedings of the 22nd International Conference on Data Engineering
(α, k)-anonymity: an enhanced k-anonymity model for privacy preserving data publishing
Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining
Utility-based anonymization using local recoding
Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining
Anatomy: simple and effective privacy preservation
VLDB '06 Proceedings of the 32nd international conference on Very large data bases
Information disclosure under realistic assumptions: privacy versus optimality
Proceedings of the 14th ACM conference on Computer and communications security
Minimality attack in privacy preserving data publishing
VLDB '07 Proceedings of the 33rd international conference on Very large data bases
Anonymization-based attacks in privacy-preserving data publishing
ACM Transactions on Database Systems (TODS)
Injector: Mining Background Knowledge for Data Anonymization
ICDE '08 Proceedings of the 2008 IEEE 24th International Conference on Data Engineering
Attacks on privacy and deFinetti's theorem
Proceedings of the 2009 ACM SIGMOD International Conference on Management of data
Anonymized data: generation, models, usage
Proceedings of the 2009 ACM SIGMOD International Conference on Management of data
Transparent anonymization: Thwarting adversaries who know the algorithm
ACM Transactions on Database Systems (TODS)
ICDT'05 Proceedings of the 10th international conference on Database Theory
Differentially private data release for data mining
Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining
Personal privacy vs population privacy: learning to attack anonymization
Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining
Anonymizing set-valued data by nonreciprocal recoding
Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining
Publishing microdata with a robust privacy guarantee
Proceedings of the VLDB Endowment
Secure distributed framework for achieving ε-differential privacy
PETS'12 Proceedings of the 12th international conference on Privacy Enhancing Technologies
A propagation model for provenance views of public/private workflows
Proceedings of the 16th International Conference on Database Theory
Efficient Time-Stamped Event Sequence Anonymization
ACM Transactions on the Web (TWEB)
Multivariate microaggregation by iterative optimization
Applied Intelligence
Anonymization has become a popular paradigm for preserving the privacy of data subjects when sharing data. Since the introduction of k-anonymity, dozens of methods and strengthened privacy definitions have been proposed. However, over-eager attempts to minimize the information lost during anonymization can themselves allow private information to be inferred. Proofs of concept of this "minimality attack" have been demonstrated against a variety of algorithms and definitions [16]. In this paper, we provide a comprehensive analysis of the attack and demonstrate that, with care, its effect can be almost entirely countered. The attack allows an adversary to increase his (probabilistic) belief in certain facts about individuals in the data. We show that (a) a large class of algorithms is unaffected by the attack, (b) for a class of algorithms with a "symmetric" property, the attacker's belief increases by at most a small constant, and (c) even for an algorithm chosen to be highly susceptible to the attack, the attacker's belief increases by at most a small constant factor. A series of experiments confirms that in all these cases the attacker's confidence about any individual's sensitive value remains low in practice, while the published data remains useful for its intended purpose. From this, we conclude that the impact of such method-based attacks can be minimized.
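The "belief" the abstract refers to can be made concrete with a toy sketch. In a generalized release, all records in an equivalence class share the same (generalized) quasi-identifier, so a baseline adversary's posterior belief that a target holds sensitive value s is simply the fraction of records in the target's class with that value. The dataset, the generalized values, and the function name below are illustrative assumptions, not taken from the paper:

```python
from collections import Counter

# Toy 3-anonymous release: (generalized quasi-identifier, sensitive value).
# "2*" stands for a generalized age range such as 20-29.
records = [
    (("2*", "M"), "flu"),
    (("2*", "M"), "flu"),
    (("2*", "M"), "cancer"),
    (("3*", "F"), "flu"),
    (("3*", "F"), "hepatitis"),
    (("3*", "F"), "flu"),
]

def belief(records, qi, sensitive):
    """Posterior belief Pr[sensitive | qi]: the fraction of records in the
    target's equivalence class that carry the given sensitive value."""
    group = [s for q, s in records if q == qi]
    return Counter(group)[sensitive] / len(group)

print(belief(records, ("2*", "M"), "cancer"))  # 1/3
```

A minimality attack goes further: by reasoning about which inputs could have led the specific (minimizing) algorithm to publish this exact output, the adversary can rule out some candidate inputs and push this posterior above the baseline fraction; the paper's results bound how large that increase can be.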