k-anonymity: a model for protecting privacy

  • Authors: Latanya Sweeney
  • Affiliations: School of Computer Science, Carnegie Mellon University, Pittsburgh, Pennsylvania
  • Venue: International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems
  • Year: 2002

Abstract

Consider a data holder, such as a hospital or a bank, that has a privately held collection of person-specific, field-structured data. Suppose the data holder wants to share a version of the data with researchers. How can a data holder release a version of its private data with scientific guarantees that the individuals who are the subjects of the data cannot be re-identified while the data remain practically useful? The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment. A release provides k-anonymity protection if the information for each person contained in the release cannot be distinguished from that of at least k-1 other individuals whose information also appears in the release. This paper also examines re-identification attacks that can be realized on releases that adhere to k-anonymity unless the accompanying policies are respected. The k-anonymity protection model is important because it forms the basis on which the real-world systems known as Datafly, µ-Argus, and k-Similar provide guarantees of privacy protection.
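
To make the definition concrete, below is a minimal sketch in Python of a k-anonymity check: a release is k-anonymous over a chosen set of quasi-identifier attributes if every combination of quasi-identifier values appearing in the release appears in at least k records. The function name (satisfies_k_anonymity), the record representation, and the example table are illustrative assumptions, not code from the paper or from the Datafly, µ-Argus, or k-Similar systems.

    from collections import Counter

    def satisfies_k_anonymity(records, quasi_identifiers, k):
        # A release is k-anonymous over the quasi-identifiers if every
        # combination of quasi-identifier values that appears in the
        # release appears in at least k records.
        groups = Counter(
            tuple(record[attr] for attr in quasi_identifiers)
            for record in records
        )
        return all(count >= k for count in groups.values())

    # Hypothetical release with {zip, age} as the quasi-identifiers;
    # the ZIP codes have been generalized to "021*" before release.
    release = [
        {"zip": "021*", "age": "20-29", "condition": "flu"},
        {"zip": "021*", "age": "20-29", "condition": "cold"},
        {"zip": "021*", "age": "30-39", "condition": "flu"},
        {"zip": "021*", "age": "30-39", "condition": "asthma"},
        {"zip": "021*", "age": "30-39", "condition": "cold"},
    ]
    print(satisfies_k_anonymity(release, ["zip", "age"], 2))  # True
    print(satisfies_k_anonymity(release, ["zip", "age"], 3))  # False: the 20-29 group has only 2 records

As the abstract's discussion of attacks suggests, passing this structural check alone does not guarantee privacy; the paper's accompanying policies address re-identification attacks that remain possible, for example linking on attributes outside the chosen quasi-identifier set.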