A practice-oriented framework for measuring privacy and utility in data sanitization systems
Proceedings of the 2010 EDBT/ICDT Workshops
Data collection agencies publish sensitive data for legitimate purposes such as research and marketing. Data publishing has attracted much interest in the research community because of serious concerns over the protection of individuals' privacy. As a result, several sanitization mechanisms with different notions of privacy have been proposed. To measure, set, and compare their levels of privacy protection, these different mechanisms need to be translated into a unified system. In this paper, we propose a novel information-theoretic framework that represents a formal model of a sanitization mechanism as a noisy channel and evaluates its privacy and utility. We show that the deterministic publishing property used by most of these mechanisms weakens their privacy guarantees and causes information to leak, and that the adversary's background knowledge has a strong effect on this metric. We also show that, using this framework, we can compute the utility a sanitization mechanism preserves from the point of view of a data user. Using the specifications of a popular sanitization mechanism, k-anonymity, we analytically derive a channel representation of this mechanism to be used for its evaluation.
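To make the noisy-channel view concrete, the following is a minimal sketch (an illustration under assumed toy distributions, not the paper's actual model): a sanitization mechanism is described by the conditional probabilities P(output | secret), and one standard information-theoretic leakage measure is the mutual information I(S; O) between secret inputs and published outputs. It also illustrates the abstract's point about deterministic publishing: a deterministic mapping that distinguishes the secrets leaks the full prior entropy, while a suitably randomized mechanism can leak nothing.

```python
import math

def mutual_information(prior, channel):
    """Leakage I(S; O) in bits.

    prior:   dict secret -> P(s)
    channel: dict secret -> dict output -> P(o | s)
    """
    # Marginal distribution of published outputs: P(o) = sum_s P(s) P(o|s).
    p_out = {}
    for s, ps in prior.items():
        for o, pos in channel[s].items():
            p_out[o] = p_out.get(o, 0.0) + ps * pos
    # I(S;O) = sum_{s,o} P(s) P(o|s) log2( P(o|s) / P(o) ).
    mi = 0.0
    for s, ps in prior.items():
        for o, pos in channel[s].items():
            if pos > 0:
                mi += ps * pos * math.log2(pos / p_out[o])
    return mi

uniform = {"s0": 0.5, "s1": 0.5}

# Deterministic publishing: each secret maps to a distinct output,
# so the output reveals the secret and leakage equals H(S) = 1 bit.
deterministic = {"s0": {"a": 1.0}, "s1": {"b": 1.0}}
print(mutual_information(uniform, deterministic))  # 1.0

# Randomized publishing: both secrets induce the same output
# distribution, so the output carries no information: I(S;O) = 0.
randomized = {"s0": {"a": 0.5, "b": 0.5}, "s1": {"a": 0.5, "b": 0.5}}
print(mutual_information(uniform, randomized))  # 0.0
```

Note that the prior over secrets here plays the role of the adversary's background knowledge: changing it changes the computed leakage, which is the dependence the abstract highlights.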