A matrix M over a fixed alphabet is k-anonymous if every row in M has at least k - 1 identical copies in M. Making a matrix k-anonymous by replacing a minimum number of entries with an additional *-symbol (called "suppressing entries") is known to be NP-hard. This task arises in the context of privacy-preserving data publishing. We propose and analyze the computational complexity of an enhanced anonymization model in which the user of the k-anonymized data may additionally "guide" the selection of the candidate matrix entries to be suppressed. The basic idea is to express this guidance by means of "pattern vectors" that are part of the input; the model can also be interpreted as a kind of clustering process. It is motivated by the observation that the "value" of matrix entries may differ significantly: losing one entry (by suppression) may be more harmful than losing another, and this in turn may strongly depend on the intended use of the anonymized data. We show that even very basic special cases of our new model are NP-hard, while others allow for (fixed-parameter) tractability results.
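To make the definitions concrete, the following minimal sketch (not the authors' algorithm) checks whether a matrix is k-anonymous and applies a single pattern vector that marks which columns to suppress with the *-symbol; in the paper's model, several pattern vectors are part of the input and rows are assigned to them, which is simplified here for illustration.

```python
from collections import Counter

def is_k_anonymous(matrix, k):
    """A matrix is k-anonymous if every row occurs at least k times,
    i.e. every row has at least k - 1 identical copies."""
    counts = Counter(tuple(row) for row in matrix)
    return all(c >= k for c in counts.values())

def suppress_by_pattern(matrix, pattern):
    """Apply one pattern vector to every row: keep entries where the
    pattern has 1, replace the rest with the suppression symbol '*'.
    (Illustrative only: the paper's model allows several pattern
    vectors and assigns each row to one of them.)"""
    return [[v if keep else "*" for v, keep in zip(row, pattern)]
            for row in matrix]

M = [["a", "b", "c"],
     ["a", "b", "d"],
     ["a", "b", "e"]]

print(is_k_anonymous(M, 2))             # False: all three rows are distinct
M2 = suppress_by_pattern(M, [1, 1, 0])  # suppress the third column
print(is_k_anonymous(M2, 3))            # True: every row becomes ("a","b","*")
```

Suppressing the third column costs three entries and makes all rows identical, which illustrates the trade-off the model is about: which entries to sacrifice depends on which columns the data user considers valuable.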