Before microdata are shared to support ad hoc aggregate analyses, they often need to be anonymized to protect the privacy of individuals. A variety of privacy models have been proposed for microdata anonymization. Many of these models (e.g., t-closeness) essentially require that, after anonymization, groups of sensitive attribute values follow specified distributions. To support such models, in this paper we study the problem of transforming a group of sensitive attribute values to follow a certain target distribution with minimal data distortion. Specifically, we develop and evaluate a novel methodology that combines sensitive attribute permutation and generalization with the addition of fake sensitive attribute values to achieve this transformation. We identify metrics related to the accuracy of aggregate query answers over the transformed data, and develop efficient anonymization algorithms to optimize these accuracy metrics. Using a variety of data sets, we experimentally demonstrate the effectiveness of our techniques.
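One ingredient of the methodology above — adding fake sensitive values so that a group's empirical distribution matches a target — can be illustrated with a simplified sketch. The function below (a hypothetical helper, not the paper's optimization algorithm) computes, for each sensitive category, how many fake values must be added so that the combined counts follow the target distribution exactly, assuming the target probabilities are exact for some finite group size:

```python
from math import ceil
from collections import Counter

def fake_values_to_match(values, target):
    """Given observed sensitive values and a target distribution
    (category -> probability), return the number of fake values to
    add per category so the combined counts follow the target.
    A simplified illustration, not the paper's distortion-minimizing
    algorithm; assumes every target probability is positive."""
    counts = Counter(values)
    # Smallest total size n with n * p >= observed count for every category.
    n = max(ceil(counts[c] / p) for c, p in target.items())
    # Grow n until every n * p is (near-)integral, so counts are whole.
    while any(abs(n * p - round(n * p)) > 1e-9 for p in target.values()):
        n += 1
    return {c: round(n * p) - counts[c] for c, p in target.items()}
```

For example, a group with sensitive values `['flu', 'flu', 'flu', 'hiv']` and a uniform target over the two diseases needs two fake `'hiv'` records to reach a 3:3 split. In the paper's setting, such fake additions are traded off against permutation and generalization to minimize the distortion seen by aggregate queries.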