This paper deals with a new type of privacy threat, called "corruption", in anonymized data publication. Specifically, an adversary is said to have corrupted some individuals if they have already obtained those individuals' sensitive values before consulting the released data. Conventional generalization can lead to severe privacy disclosure in the presence of corruption. Motivated by this, we advocate an alternative anonymization technique that integrates generalization with perturbation and stratified sampling. The integration provides strong privacy guarantees even if an adversary has corrupted an arbitrary number of individuals. We verify the effectiveness of the proposed technique through experiments with real data.
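To make the three ingredients concrete, the following is a minimal sketch of how generalization, stratified sampling, and perturbation can be combined on toy microdata. It is an illustration under assumed parameters (bucket width, sampling fraction, retention probability) and assumed column names, not the paper's actual algorithm or guarantees:

```python
import random

def generalize_age(age, width=10):
    # Generalization: coarsen a numeric quasi-identifier into an interval.
    lo = (age // width) * width
    return f"{lo}-{lo + width - 1}"

def perturb(value, domain, p_keep=0.7, rng=random):
    # Randomized-response-style perturbation: keep the true sensitive
    # value with probability p_keep, otherwise draw uniformly from the
    # sensitive-attribute domain. Even a corrupting adversary who knows
    # some true values cannot tell whether a released value is genuine.
    if rng.random() < p_keep:
        return value
    return rng.choice(domain)

def anonymize(records, domain, sample_frac=0.5, rng=None):
    rng = rng or random.Random(0)
    # 1. Generalization: group records into strata by generalized QI.
    strata = {}
    for age, sensitive in records:
        strata.setdefault(generalize_age(age), []).append(sensitive)
    # 2. Stratified sampling: draw the same fraction from each stratum,
    #    so released counts still reflect the group structure.
    # 3. Perturbation: noise the sensitive value of each sampled record.
    released = []
    for bucket, values in sorted(strata.items()):
        k = max(1, int(len(values) * sample_frac))
        for v in rng.sample(values, k):
            released.append((bucket, perturb(v, domain, rng=rng)))
    return released

records = [(23, "flu"), (27, "cold"), (31, "flu"),
           (35, "hiv"), (38, "cold"), (41, "flu")]
table = anonymize(records, domain=["flu", "cold", "hiv"])
```

Sampling means an adversary cannot be sure a corrupted individual's record is even present in the release, and perturbation breaks the link between a released sensitive value and the true one; generalization alone provides neither protection.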