Privacy-preservation for gradient descent methods

  • Authors:
  • Li Wan (Nanyang Technological University); Wee Keong Ng (Nanyang Technological University); Shuguo Han (Nanyang Technological University); Vincent C. S. Lee (Monash University)

  • Venue:
  • Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining

  • Year:
  • 2007


Abstract

Gradient descent is a widely used paradigm for solving many optimization problems. Stochastic gradient descent performs a series of iterations that progressively minimize a target function until a local minimum is reached. In machine learning and data mining, this function corresponds to a decision model that is to be discovered. The gradient descent paradigm underlies many commonly used techniques in data mining and machine learning, such as neural networks, Bayesian networks, genetic algorithms, and simulated annealing. To the best of our knowledge, no prior work extends the notion of privacy preservation or secure multi-party computation to gradient-descent-based techniques. In this paper, we propose a preliminary approach that enables privacy preservation in gradient descent methods in general, and we demonstrate its feasibility for specific gradient descent methods.
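To make the iteration the abstract refers to concrete, below is a minimal sketch of plain (non-private) gradient descent: repeatedly step against the gradient until the update becomes negligible. This is only the baseline procedure that the paper builds on, not its privacy-preserving protocol; the function names, the learning-rate and tolerance values, and the example objective are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the plain gradient descent iteration (illustrative only;
# not the paper's privacy-preserving protocol).
import numpy as np

def gradient_descent(grad_fn, w0, learning_rate=0.1, num_iters=100, tol=1e-8):
    """Iteratively move against the gradient until the step size is negligible."""
    w = np.asarray(w0, dtype=float)
    for _ in range(num_iters):
        step = learning_rate * grad_fn(w)   # gradient of the target function at w
        w = w - step                        # move toward a (local) minimum
        if np.linalg.norm(step) < tol:      # stop once updates are effectively zero
            break
    return w

# Example: minimize f(w) = ||w - 3||^2, whose gradient is 2 * (w - 3).
w_min = gradient_descent(lambda w: 2.0 * (w - 3.0), w0=np.zeros(2))
print(w_min)  # approaches [3., 3.]
```

In the multi-party setting the paper addresses, the gradient in each such iteration would have to be computed from data held by different parties without revealing that data, which is where the proposed privacy-preserving approach comes in.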