Publishing decision trees can provide enormous benefits to society. At the same time, it is widely believed that publishing decision trees can pose a risk to privacy. However, the privacy consequences of publishing decision trees have received little investigation. To understand this problem, we need to measure privacy risk quantitatively. Based on the well-established maximum entropy principle, we have developed a systematic method to quantify privacy risks when decision trees are published. Our method converts the knowledge embedded in decision trees into equations and inequalities (called constraints), and then uses a nonlinear programming tool to perform maximum entropy estimation. The estimation results are then used to quantify privacy. We have conducted experiments to evaluate the effectiveness and performance of our method.
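The pipeline described above can be illustrated with a minimal sketch. All names and numbers below are hypothetical: a toy two-attribute domain (quasi-identifier × sensitive value), with two constraints standing in for what a published decision tree leaf reveals (its coverage and class confidence). Instead of a general nonlinear programming tool, the sketch uses iterative proportional fitting, which converges to the maximum entropy distribution for this kind of linear equality constraint when started from the uniform distribution.

```python
from itertools import product

# Hypothetical toy domain: quasi-identifier (age group) x sensitive attribute.
ages = ["young", "old"]
diseases = ["flu", "hiv", "none"]
cells = list(product(ages, diseases))

# Illustrative constraints extracted from a published tree: a leaf
# "age=old" covers 50% of records and predicts disease=hiv with
# confidence 0.6, i.e. P(old) = 0.5 and P(old, hiv) = 0.5 * 0.6 = 0.3.
constraints = [
    (lambda a, d: a == "old", 0.5),
    (lambda a, d: a == "old" and d == "hiv", 0.3),
]

def max_entropy(cells, constraints, iters=2000):
    """Maximum entropy joint distribution subject to constraints of the
    form sum_{cell in S} p(cell) = target, via iterative proportional
    fitting (a simple stand-in for a nonlinear programming solver)."""
    # Uniform start: the unconstrained maximum entropy distribution.
    p = {c: 1.0 / len(cells) for c in cells}
    for _ in range(iters):
        for member, target in constraints:
            mass = sum(p[c] for c in cells if member(*c))
            if mass in (0.0, 1.0):
                continue
            # Rescale inside/outside the constraint set so this
            # constraint holds exactly while total mass stays 1.
            for c in cells:
                p[c] *= target / mass if member(*c) else (1 - target) / (1 - mass)
    return p

p = max_entropy(cells, constraints)

# Privacy quantification: the attacker's best estimate of the sensitive
# value for an "old" individual, derived from the maxent distribution.
posterior_hiv_given_old = p[("old", "hiv")] / sum(
    p[(a, d)] for a, d in cells if a == "old"
)
```

Here the estimate recovers the leaf's confidence (`posterior_hiv_given_old` is about 0.6), while the remaining probability mass is spread as uniformly as the constraints allow; comparing such posteriors against a privacy threshold is one way the estimation results can be turned into a risk measure.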