Privacy inference attacking and prevention on multiple relative k-anonymized microdata sets

  • Authors:
  • Yalong Dong;Zude Li;Xiaojun Ye

  • Affiliations:
  • Key Laboratory for Information System Security, Ministry of Education, School of Software, Tsinghua University, Beijing, China;Computer Science Dept., University of Western Ontario, London, Ontario, Canada;Key Laboratory for Information System Security, Ministry of Education, School of Software, Tsinghua University, Beijing, China

  • Venue:
  • APWeb'08 Proceedings of the 10th Asia-Pacific web conference on Progress in WWW research and development
  • Year:
  • 2008


Abstract

In the k-anonymity modeling process, it is widely assumed that a relational table of microdata is published with a single sensitive attribute. This assumption is overly simplistic. We observe that multiple sensitive attributes in one or more tables may incur privacy inference violations that are not visible under the single-sensitive-attribute assumption. In this paper, a new (k, l)-anonymity model is introduced, extending the existing l-diversity mechanism; it is an improved microdata publication model that can effectively prevent these multi-attribute privacy violations. The (k, l)-anonymity process consists of two phases: k-anonymization on the identifying attributes and l-diversity on the sensitive attributes. The related (k, l)-anonymity algorithms are proposed, and a data generalization metric is provided for minimizing the anonymization cost. A running example illustrates the technique in detail and demonstrates its effectiveness.
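The two-phase condition described in the abstract can be illustrated with a small verification sketch. This is not the paper's algorithm (which performs the anonymization itself); it is a minimal, assumed checker that tests whether an already-generalized table satisfies both phases: every equivalence class of quasi-identifier values has at least k members, and each sensitive attribute takes at least l distinct values within each class. All function and attribute names here are hypothetical.

```python
from collections import defaultdict

def satisfies_k_l_anonymity(records, quasi_ids, sensitive_attrs, k, l):
    """Hypothetical checker for a simple (k, l)-anonymity condition:
    every equivalence class (records sharing the same quasi-identifier
    values) has >= k members, and within each class every sensitive
    attribute takes >= l distinct values (l-diversity per attribute)."""
    # Phase 1 grouping: bucket records by their quasi-identifier tuple.
    groups = defaultdict(list)
    for rec in records:
        key = tuple(rec[a] for a in quasi_ids)
        groups[key].append(rec)
    for members in groups.values():
        # k-anonymity: each equivalence class must contain >= k records.
        if len(members) < k:
            return False
        # l-diversity: each sensitive attribute must show >= l distinct
        # values inside the class, for every sensitive attribute.
        for attr in sensitive_attrs:
            if len({m[attr] for m in members}) < l:
                return False
    return True

# Tiny generalized table with two sensitive attributes (illustrative data).
table = [
    {"zip": "100**", "age": "2*", "disease": "flu",  "salary": "low"},
    {"zip": "100**", "age": "2*", "disease": "cold", "salary": "high"},
    {"zip": "100**", "age": "2*", "disease": "hiv",  "salary": "mid"},
]
ok = satisfies_k_l_anonymity(table, ["zip", "age"],
                             ["disease", "salary"], k=3, l=2)
```

Checking both sensitive attributes per class is what distinguishes this from plain l-diversity on a single attribute: a table can be l-diverse on `disease` yet leak `salary` if that column is uniform within a class.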