Identifying and Preventing Data Leakage in Multi-relational Classification

  • Authors:
  • Hongyu Guo;Herna L. Viktor;Eric Paquet

  • Venue:
  • ICDMW '10 Proceedings of the 2010 IEEE International Conference on Data Mining Workshops
  • Year:
  • 2010

Abstract

Relational database mining, where data are mined across multiple relations, is increasingly commonplace. When considering a complex database schema, it becomes difficult to identify all possible relationships between attributes from the different relations. That is, seemingly harmless attributes may be linked to confidential information, leading to data leaks when building a model. In this way, we are at risk of disclosing unwanted knowledge when publishing the results of a data mining exercise. For instance, consider a financial database classification task to determine whether a loan is considered to be high risk. Suppose that we are aware that the database contains another confidential attribute, such as income level, which should not be divulged. In order to prevent potential privacy leakage, one may thus choose to eliminate, or distort, the income level in the database. However, even after distortion, a model learned from the modified database may still accurately determine the income level values. It follows that the database remains unsafe and may be compromised. This paper demonstrates this potential for privacy leakage in multi-relational classification and illustrates how such potential leaks may be detected. We propose a method to generate a ranked list of sub-schemas that maintain predictive performance on the class attribute while limiting the disclosure risk of confidential attributes, that is, the accuracy with which they can be predicted. We illustrate our method using a financial database.
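
The following is a minimal sketch, not the authors' algorithm, of the general idea described in the abstract: each candidate sub-schema is flattened into a feature table and scored both for utility (how well it predicts the class attribute, e.g. loan risk) and for disclosure risk (how well it predicts the confidential attribute, e.g. income level), and sub-schemas are then ranked by that trade-off. All names and the scoring rule below are hypothetical; the paper's actual ranking procedure may differ.

```python
# Illustrative sketch only: rank candidate sub-schemas by the trade-off between
# utility (accuracy on the class attribute) and disclosure risk (accuracy on
# the confidential attribute). Function and argument names are hypothetical.
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score


def rank_subschemas(subschema_tables, class_labels, confidential_labels, cv=5):
    """Return sub-schemas sorted by (utility - disclosure risk), best first.

    subschema_tables   : dict mapping a sub-schema name to a flattened
                         (propositionalised) feature table of joined relations.
    class_labels       : values of the class attribute (e.g. loan risk).
    confidential_labels: values of the confidential attribute (e.g. income level).
    """
    scores = []
    for name, features in subschema_tables.items():
        clf = DecisionTreeClassifier(random_state=0)
        # Utility: cross-validated accuracy on the class attribute.
        utility = cross_val_score(clf, features, class_labels, cv=cv).mean()
        # Disclosure risk: cross-validated accuracy on the confidential
        # attribute that was supposedly removed or distorted.
        risk = cross_val_score(clf, features, confidential_labels, cv=cv).mean()
        scores.append((name, utility, risk, utility - risk))
    # Prefer sub-schemas with high utility and low disclosure risk.
    return sorted(scores, key=lambda s: s[3], reverse=True)
```

Under this sketch, a sub-schema whose features still recover the distorted income level with high accuracy would be flagged as a leakage risk even if it scores well on the loan-risk task.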