Existing work on privacy-preserving data publishing cannot satisfactorily prevent an adversary with background knowledge from learning sensitive information. The main challenge lies in modeling the adversary's background knowledge. We propose a novel approach to deal with such attacks: first mine knowledge from the data to be released, then use the mining results as the background knowledge when anonymizing the data. The rationale is that if certain facts or pieces of background knowledge exist, they should manifest themselves in the data, and data mining techniques should be able to find them. One intriguing aspect of this approach is that it can be argued to improve both privacy and utility at the same time: it protects against background knowledge attacks while better preserving the features in the data. We then present the Injector framework for data anonymization. Injector mines negative association rules from the data to be released and uses them in the anonymization process. We also develop an efficient anonymization algorithm to compute injected tables that incorporate the mined background knowledge. Experimental results show that Injector reduces privacy risks from background knowledge attacks while improving data utility.
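A negative association rule here is a pattern of the form "records with quasi-identifier value v do not have sensitive value s" (e.g., male patients do not have ovarian cancer). As a rough illustration only (this is not the paper's algorithm; the function name, record layout, and support/confidence thresholds below are our own assumptions), such rules can be mined by flagging quasi-identifier values that are frequent overall yet rarely co-occur with a given sensitive value:

```python
from collections import Counter

def mine_negative_rules(records, qi_attr, sens_attr,
                        min_support=0.1, max_conf=0.05):
    """Mine simple single-attribute negative association rules of the
    form (qi_attr = v) => NOT (sens_attr = s).

    A rule is emitted when v is frequent enough to be reliable
    (support >= min_support) but co-occurs with s at most a tiny
    fraction of the time (confidence <= max_conf)."""
    n = len(records)
    qi_counts = Counter(r[qi_attr] for r in records)
    pair_counts = Counter((r[qi_attr], r[sens_attr]) for r in records)
    sens_values = {r[sens_attr] for r in records}

    rules = []
    for qi_val, qi_count in qi_counts.items():
        if qi_count / n < min_support:
            continue  # antecedent too rare to support a reliable rule
        for s_val in sens_values:
            conf = pair_counts[(qi_val, s_val)] / qi_count
            if conf <= max_conf:
                rules.append((qi_val, s_val))  # qi_val => NOT s_val
    return rules
```

During anonymization, such rules could then be consulted so that a group is not formed in a way that lets an adversary who already knows the rule eliminate sensitive values and narrow down an individual's record; the paper's actual anonymization algorithm operates on the mined rules directly.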