Privacy-preserving classification is the task of learning or training a classifier on the union of privately distributed datasets without sharing those datasets. Existing studies in privacy-preserving classification have focused primarily on designing privacy-preserving versions of particular data mining algorithms. However, in classification problems, preprocessing and postprocessing steps, such as model selection or attribute selection, play a prominent role in achieving higher classification accuracy. In this paper, we show that the generalization error of classifiers in privacy-preserving classification can be securely evaluated without sharing prediction results. Our main technical contribution is a new generalized Hamming distance protocol that is universally applicable to the preprocessing and postprocessing of various privacy-preserving classification problems, such as model selection for support vector machines and attribute selection in naive Bayes classification.
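To make the core idea concrete, here is a toy Python sketch (not the paper's actual protocol) of how a Hamming distance between a prediction vector and a label vector can be computed from XOR secret shares, so that only the final disagreement count, and hence the error estimate, is revealed. The function names and the two-vector setup are illustrative assumptions; a real protocol would also hide the per-position partial results, e.g., with a secure sum or homomorphic encryption.

```python
import random

def share(bit):
    # Split one bit into two XOR shares: r and (bit XOR r).
    # Each share alone is uniformly random and reveals nothing.
    r = random.randrange(2)
    return r, bit ^ r

def secure_hamming(preds, labels):
    """Toy two-party Hamming distance from XOR secret shares.

    Illustrative only: party A holds `preds`, party B holds `labels`;
    each bit is shared so that neither raw vector is exposed. In a real
    protocol the aggregation below would itself be done securely.
    """
    assert len(preds) == len(labels)
    dist = 0
    for p, y in zip(preds, labels):
        p0, p1 = share(p)   # shares of A's prediction bit
        y0, y1 = share(y)   # shares of B's label bit
        # XORing the two partial results reconstructs p XOR y
        # without either side seeing the other's plain bit.
        dist += (p0 ^ y0) ^ (p1 ^ y1)
    return dist

preds  = [1, 0, 1, 1, 0, 1]
labels = [1, 1, 1, 0, 0, 1]
d = secure_hamming(preds, labels)
print(d, d / len(preds))  # disagreement count and error-rate estimate
```

Because the Hamming distance between predictions and true labels is exactly the number of misclassifications, revealing only this count lets the parties compare models (e.g., during model or attribute selection) without ever exchanging the prediction vectors themselves.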