Privacy consensus in anonymization systems via game theory

  • Authors:
  • Rosa Karimi Adl, Mina Askari, Ken Barker, Reihaneh Safavi-Naini

  • Affiliations:
  • Department of Computer Science, University of Calgary, Calgary, AB, Canada (all authors)

  • Venue:
  • DBSec'12 Proceedings of the 26th Annual IFIP WG 11.3 conference on Data and Applications Security and Privacy
  • Year:
  • 2012

Abstract

Privacy protection is a fundamental concern when personal data is collected, stored, and published. Several anonymization methods have been proposed to address privacy issues in private datasets. Every anonymization method has at least one parameter that adjusts the level of privacy protection while preserving some utility of the collected data. Choosing a desirable level of privacy protection is a crucial decision, yet so far no systematic mechanism exists to guide how the privacy parameter should be set. In this paper, we model this challenge in a game-theoretic framework to find consensual privacy protection levels and to identify the characteristics of each anonymization method. Our model can potentially be used to compare different anonymization methods and to distinguish the settings that make one anonymization method more appealing than the others. We describe the general approach to solving such games and illustrate the procedure using k-anonymity as a sample anonymization method. Our simulations of the game in the case of k-anonymity reveal how the equilibrium values of k depend on the number of quasi-identifiers, the maximum number of repetitive records, the anonymization cost, and the public's privacy behaviour.
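
Since the abstract uses k-anonymity as the running example, the following minimal sketch illustrates what the privacy parameter k controls: every combination of quasi-identifier values must appear in at least k records. The dataset, attribute names, and grouping logic here are illustrative only and are not taken from the paper or its game model.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Return True if every combination of quasi-identifier values
    occurs in at least k records of the dataset."""
    groups = Counter(
        tuple(record[attr] for attr in quasi_identifiers)
        for record in records
    )
    return all(count >= k for count in groups.values())

# Toy, already-generalized dataset (hypothetical attribute names/values).
records = [
    {"age": "30-40", "zip": "1234*", "disease": "flu"},
    {"age": "30-40", "zip": "1234*", "disease": "cold"},
    {"age": "20-30", "zip": "5678*", "disease": "flu"},
    {"age": "20-30", "zip": "5678*", "disease": "asthma"},
]

print(is_k_anonymous(records, ["age", "zip"], k=2))  # True
print(is_k_anonymous(records, ["age", "zip"], k=3))  # False
```

Raising k strengthens privacy (larger indistinguishability groups) but typically requires coarser generalization, which lowers data utility; the paper's game-theoretic model is aimed at finding a value of k on which the involved parties can agree.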