An information theoretic approach for privacy metrics
Transactions on Data Privacy
Organizations often need to release microdata without revealing sensitive information. To this end, the data are anonymized and, to assess the quality of the anonymization, various privacy metrics have been proposed, such as k-anonymity, l-diversity, and t-closeness. These metrics capture different aspects of the disclosure risk by imposing minimal requirements on the association of an individual with the sensitive attributes. If we want to combine them in an optimization problem, we need a common framework able to express all of these privacy conditions. Previous studies proposed mutual information as a measure of the different kinds of disclosure risk and of utility but, since mutual information is an average quantity, it cannot fully express these conditions on single records. We introduce here the notion of one-symbol information (i.e., the contribution of a single record to the mutual information), which allows the disclosure risk metrics to be expressed. We also show, with a simple example, how l-diversity and t-closeness can be represented in terms of two different, but equally acceptable, conditions on the information gain.
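To make the notion concrete, the following is a minimal Python sketch, not the paper's implementation: it computes, for an invented toy microdata table (the group labels and sensitive values are illustrative assumptions), the one-symbol information of each record — the pointwise term log2(p(y|x)/p(y)) for a quasi-identifier group x and a sensitive value y — and recovers the mutual information as its average over records:

```python
from collections import Counter
from math import log2

# Hypothetical toy microdata: (quasi-identifier group, sensitive value).
records = [
    ("g1", "flu"), ("g1", "flu"), ("g1", "cancer"),
    ("g2", "flu"), ("g2", "hiv"), ("g2", "cancer"),
]

n = len(records)
count_xy = Counter(records)               # joint counts over (group, value)
count_x = Counter(x for x, _ in records)  # marginal counts of the groups
count_y = Counter(y for _, y in records)  # marginal counts of the values

def one_symbol_info(x, y):
    """Pointwise contribution of a single record: log2(p(y|x) / p(y))."""
    p_y_given_x = count_xy[(x, y)] / count_x[x]
    p_y = count_y[y] / n
    return log2(p_y_given_x / p_y)

# Mutual information I(X;Y) is the average of the one-symbol terms
# over the records; individual terms can far exceed this average,
# which is why an average metric cannot bound single-record risk.
mi = sum(one_symbol_info(x, y) for x, y in records) / n
```

In this toy table the record ("g2", "hiv") carries one full bit of information about its sensitive value even though the average mutual information is much smaller, illustrating why per-record (one-symbol) conditions are needed on top of average ones.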