Fairness-aware Learning through Regularization Approach

  • Authors:
  • Toshihiro Kamishima, Shotaro Akaho, Jun Sakuma

  • Venue:
  • ICDMW '11: Proceedings of the 2011 IEEE 11th International Conference on Data Mining Workshops
  • Year:
  • 2011

Abstract

With the spread of data mining technologies and the accumulation of social data, such technologies and data are being used for determinations that seriously affect people's lives. For example, credit scoring is frequently determined based on records of past credit data together with statistical prediction techniques. Needless to say, such determinations must be socially and legally fair from the viewpoint of social responsibility; namely, they must be unbiased and nondiscriminatory with respect to sensitive features such as race, gender, and religion. Several researchers have recently begun to develop analysis techniques that are aware of social fairness or discrimination. They have shown that simply avoiding the use of sensitive features is insufficient for eliminating biases in determinations, due to the indirect influence of sensitive information. From a privacy-preserving viewpoint, this can be interpreted as hiding sensitive information when classification results are observed. In this paper, we first discuss three causes of unfairness in machine learning. We then propose a regularization approach that is applicable to any prediction algorithm based on probabilistic discriminative models. We further apply this approach to logistic regression and empirically show its effectiveness and efficiency.
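
The general shape of such an objective can be illustrated with a short sketch. The snippet below is a minimal illustration under assumed names and a simplified penalty, not the paper's exact regularizer: it fits a logistic regression whose loss adds an ordinary L2 term plus a differentiable fairness penalty, here taken to be the squared gap between group-wise mean predicted probabilities for a binary sensitive feature s.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_fair_logreg(X, y, s, eta=1.0, l2=0.1, lr=0.05, n_iter=2000):
    """Logistic regression trained by gradient descent with a fairness penalty.

    Illustrative sketch only; the penalty below (squared gap between
    group-wise mean predictions) is a stand-in for a fairness regularizer,
    not the regularizer proposed in the paper.

    X : (n, d) non-sensitive features
    y : (n,) binary labels in {0, 1}
    s : (n,) binary sensitive feature in {0, 1}
    eta : weight of the fairness penalty
    l2 : weight of the ordinary L2 regularizer
    """
    n, d = X.shape
    w = np.zeros(d)
    mask1, mask0 = (s == 1), (s == 0)
    for _ in range(n_iter):
        p = sigmoid(X @ w)                       # predicted P(y=1 | x)
        grad_nll = X.T @ (p - y) / n             # gradient of the log loss
        # Fairness penalty: squared gap between group-wise mean predictions.
        gap = p[mask1].mean() - p[mask0].mean()
        dp = p * (1.0 - p)                       # d sigmoid / d z
        dgap = (X[mask1] * dp[mask1, None]).mean(axis=0) \
             - (X[mask0] * dp[mask0, None]).mean(axis=0)
        grad_fair = 2.0 * gap * dgap
        w -= lr * (grad_nll + eta * grad_fair + l2 * w)
    return w
```

In a sketch of this form, increasing eta trades predictive accuracy for independence between the classifier's output and the sensitive feature, which is the kind of accuracy-fairness trade-off the regularization approach is designed to control.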