Privacy-preserving back-propagation neural network learning over arbitrarily partitioned data

  • Authors:
  • Ankur Bansal; Tingting Chen; Sheng Zhong

  • Affiliations:
  • Department of Computer Science and Engineering, State University of New York at Buffalo, Amherst, NY 14260, USA (all authors)

  • Venue:
  • Neural Computing and Applications
  • Year:
  • 2011


Abstract

Neural networks have been an active research area for decades. However, privacy becomes a serious concern when the training dataset is distributed between two parties, which is quite common nowadays. Existing cryptographic approaches, such as the secure scalar product protocol, provide a secure way to train neural networks when the dataset is vertically partitioned. In this paper, we present a privacy-preserving algorithm for neural network learning when the dataset is arbitrarily partitioned between two parties. We show that our algorithm is secure and leaks no knowledge about the other party’s data, except the final weights learned by both parties. We demonstrate the efficiency of our algorithm through experiments on real-world data.
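The abstract refers to secure scalar product protocols as a building block for privacy-preserving training. The paper's own construction is not reproduced here, but the general idea behind two-party secure scalar products can be illustrated with additive secret sharing and Beaver multiplication triples. This is a minimal sketch under simplifying assumptions (a semi-honest dealer generating the triples, a toy prime modulus, no network communication); all function names are illustrative, not from the paper.

```python
import random

P = 2**61 - 1  # illustrative prime modulus for additive secret sharing

def share(x):
    """Split x into two additive shares mod P, one per party."""
    r = random.randrange(P)
    return r, (x - r) % P

def beaver_triple():
    """Semi-honest dealer assumption: generate shares of a, b, c = a*b."""
    a, b = random.randrange(P), random.randrange(P)
    return share(a), share(b), share((a * b) % P)

def secure_mul(x_sh, y_sh):
    """Multiply secret-shared x and y using one Beaver triple.

    Each party holds one share of x and y; only the masked values
    d = x - a and e = y - b are ever opened, revealing nothing about x, y.
    """
    (a0, a1), (b0, b1), (c0, c1) = beaver_triple()
    d = ((x_sh[0] - a0) + (x_sh[1] - a1)) % P  # opened masked value
    e = ((y_sh[0] - b0) + (y_sh[1] - b1)) % P  # opened masked value
    # x*y = c + d*b + e*a + d*e; the public d*e term is added by party 0 only
    z0 = (c0 + d * b0 + e * a0 + d * e) % P
    z1 = (c1 + d * b1 + e * a1) % P
    return z0, z1

def secure_dot(xs_sh, ys_sh):
    """Scalar product of two secret-shared vectors; result is revealed."""
    acc0 = acc1 = 0
    for x_sh, y_sh in zip(xs_sh, ys_sh):
        z0, z1 = secure_mul(x_sh, y_sh)
        acc0, acc1 = (acc0 + z0) % P, (acc1 + z1) % P
    return (acc0 + acc1) % P

# Usage: each input value is split into shares before the computation.
xs = [share(v) for v in (1, 2, 3)]
ys = [share(v) for v in (4, 5, 6)]
print(secure_dot(xs, ys))  # 1*4 + 2*5 + 3*6 = 32
```

In an arbitrarily partitioned dataset, each individual attribute value may be held by either party, so a fixed column-wise (vertical) protocol no longer suffices; share-based primitives like the one above apply regardless of which party holds which entry.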