Many applications that employ data mining techniques involve mining data that include private and sensitive information about the subjects. One way to enable effective data mining while preserving privacy is to anonymize a data set containing private information about subjects before it is released for mining. A common approach is to manipulate the data set's content so that its records adhere to k-anonymity. Two manipulation techniques commonly used to achieve k-anonymity are generalization and suppression. Generalization replaces a value with a less specific but semantically consistent one, while suppression withholds a value entirely. Generalization is applied more often in this domain because suppression, if not used carefully, can dramatically reduce the quality of the data mining results. However, generalization has a major drawback: it requires a manually generated domain hierarchy taxonomy for every quasi-identifier in the data set on which k-anonymity is to be enforced. In this paper, we propose a new method for achieving k-anonymity named K-anonymity of Classification Trees Using Suppression (kACTUS). kACTUS performs efficient multidimensional suppression, i.e., values are suppressed only in certain records, depending on the values of other attributes, without requiring manually produced domain hierarchy trees. Specifically, kACTUS identifies attributes that have little influence on the classification of the data records and suppresses them as needed to comply with k-anonymity. We evaluated kACTUS on 10 separate data sets, comparing its accuracy to that of other generalization- and suppression-based k-anonymity methods. The encouraging results suggest that kACTUS's predictive performance is better than that of existing k-anonymity algorithms.
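To make the ideas above concrete, the following is a minimal sketch, not the authors' implementation, of a k-anonymity check and of record-level (multidimensional) suppression: quasi-identifier values are blanked out only in records whose quasi-identifier combination occurs fewer than k times. The attribute ordering here is a stand-in assumption for the classification-tree ranking of attribute influence that kACTUS actually uses.

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """True if every combination of quasi-identifier values
    appears in at least k records."""
    counts = Counter(tuple(r[a] for a in quasi_ids) for r in records)
    return all(c >= k for c in counts.values())

def suppress_until_k_anonymous(records, quasi_ids, k):
    """Multidimensional suppression sketch: replace quasi-identifier
    values with '*' only in records that violate k-anonymity.
    Attributes are suppressed in the given order, which here stands
    in for kACTUS's influence ranking (an assumption, not the
    published algorithm)."""
    records = [dict(r) for r in records]  # work on copies
    for attr in quasi_ids:
        if is_k_anonymous(records, quasi_ids, k):
            break
        counts = Counter(tuple(r[a] for a in quasi_ids) for r in records)
        for r in records:
            # suppress this attribute only in under-represented records
            if counts[tuple(r[a] for a in quasi_ids)] < k:
                r[attr] = '*'
    return records
```

For example, with records {zip: 111, age: 30} x2, {zip: 222, age: 30}, and {zip: 333, age: 30} and k = 2, only the two unique zip values are suppressed; the well-represented records are left untouched, which is exactly what distinguishes record-level suppression from suppressing an entire column.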
Specifically, on average, the accuracies obtained with TDS, TDR, and kADET are lower than that of kACTUS by 3.5, 3.3, and 1.9 percent, respectively, despite their use of manually defined domain trees. The accuracy gap widens to 5.3, 4.3, and 3.1 percent, respectively, when no domain trees are used.