Privacy preserving linear discriminant analysis from perturbed data

  • Authors:
  • Somnath Chakrabarti (University of Maryland, Baltimore, MD); Zhiyuan Chen (University of Maryland, Baltimore, MD); Aryya Gangopadhyay (University of Maryland, Baltimore, MD); Shibnath Mukherjee (Yahoo! Research and Development, India, and University of Maryland, Baltimore, MD)

  • Venue:
  • Proceedings of the 2010 ACM Symposium on Applied Computing
  • Year:
  • 2010

Abstract

The ubiquity of the Internet not only makes it very convenient for individuals or organizations to share data for data mining or statistical analysis, but also greatly increases the chance of privacy breach. Many techniques, such as random perturbation, exist to protect the privacy of such data sets. However, perturbation often degrades the quality of data mining or statistical analysis conducted over the perturbed data. This paper studies the impact of random perturbation on a popular data mining and analysis method: linear discriminant analysis. The contributions are twofold. First, we discover that for large data sets, the impact of perturbation is quite limited (i.e., high-quality results may be obtained directly from perturbed data) if the perturbation process satisfies certain conditions. Second, we discover that for small data sets, the negative impact of perturbation can be reduced by publishing additional statistics about the perturbation along with the perturbed data. We provide both theoretical derivations and experimental verifications of these results.
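To make the large-data-set claim concrete, the following is a minimal sketch (not the paper's own experiment) of the setting the abstract describes: two-class Fisher linear discriminant analysis run on original data and on a copy perturbed with additive Gaussian noise, a common randomization scheme. The data distribution, noise level, and sample size are all illustrative assumptions; with many records and independent, zero-mean noise, the discriminant direction recovered from the perturbed data stays close to the original one.

```python
import random, math

def fisher_direction(class0, class1):
    """Fisher LDA direction w = Sw^{-1} (mu1 - mu0) for 2-D points."""
    def mean(pts):
        n = len(pts)
        return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

    def scatter(pts, mu):
        # Within-class scatter entries: sum of outer products of deviations.
        sxx = sxy = syy = 0.0
        for x, y in pts:
            dx, dy = x - mu[0], y - mu[1]
            sxx += dx * dx; sxy += dx * dy; syy += dy * dy
        return sxx, sxy, syy

    m0, m1 = mean(class0), mean(class1)
    a0, a1 = scatter(class0, m0), scatter(class1, m1)
    sxx, sxy, syy = a0[0] + a1[0], a0[1] + a1[1], a0[2] + a1[2]
    det = sxx * syy - sxy * sxy
    dx, dy = m1[0] - m0[0], m1[1] - m0[1]
    # 2x2 inverse of Sw written out explicitly, applied to the mean difference.
    return ((syy * dx - sxy * dy) / det, (-sxy * dx + sxx * dy) / det)

def perturb(pts, sigma, rng):
    """Additive Gaussian perturbation of every coordinate (illustrative scheme)."""
    return [(x + rng.gauss(0, sigma), y + rng.gauss(0, sigma)) for x, y in pts]

def cosine(u, v):
    """Cosine similarity between two 2-D directions."""
    return (u[0] * v[0] + u[1] * v[1]) / (math.hypot(*u) * math.hypot(*v))

rng = random.Random(0)
n = 2000  # "large" data set; shrink n to see the small-sample degradation
class0 = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(n)]
class1 = [(rng.gauss(2, 1), rng.gauss(1, 1)) for _ in range(n)]

w_orig = fisher_direction(class0, class1)
w_pert = fisher_direction(perturb(class0, 1.0, rng),
                          perturb(class1, 1.0, rng))
similarity = cosine(w_orig, w_pert)
print(f"cosine similarity of discriminant directions: {similarity:.3f}")
```

Intuitively, zero-mean independent noise leaves the class-mean difference unchanged and inflates the within-class scatter roughly isotropically, so the direction Sw^{-1}(mu1 - mu0) rotates little; rerunning with a small n shows the degradation on small data sets that motivates publishing extra perturbation statistics.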