Fairness-Aware Classifier with Prejudice Remover Regularizer

  • Authors:
  • Toshihiro Kamishima, Shotaro Akaho, Hideki Asoh, Jun Sakuma

  • Affiliations:
  • Toshihiro Kamishima, Shotaro Akaho, Hideki Asoh: National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Ibaraki, Japan
  • Jun Sakuma: University of Tsukuba, Tsukuba, Japan; Japan Science and Technology Agency, Kawaguchi, Saitama, Japan

  • Venue:
  • ECML PKDD'12: Proceedings of the 2012 European Conference on Machine Learning and Knowledge Discovery in Databases - Volume Part II
  • Year:
  • 2012

Abstract

With the spread of data mining technologies and the accumulation of social data, such technologies and data are increasingly used to make determinations that seriously affect individuals' lives. For example, credit scores are frequently determined from records of past credit data together with statistical prediction techniques. Needless to say, such determinations must be nondiscriminatory and fair with respect to sensitive features such as race, gender, and religion. Several researchers have recently begun to develop analysis techniques that are aware of social fairness or discrimination. They have shown that simply avoiding the use of sensitive features is insufficient for eliminating biases in determinations, owing to the indirect influence of sensitive information. In this paper, we first discuss three causes of unfairness in machine learning. We then propose a regularization approach that is applicable to any prediction algorithm with probabilistic discriminative models. We further apply this approach to logistic regression and empirically show its effectiveness and efficiency.
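
Illustrative Sketch

The abstract describes adding a fairness regularizer to a probabilistic discriminative model such as logistic regression. The following is a minimal sketch of that idea, not the authors' exact formulation: it penalizes a plug-in estimate of the mutual information between the model's predictions and the sensitive feature. The function names (fit, objective), the hyperparameters eta and lam, and the group-level approximation of the prejudice term are assumptions made for illustration only.

    # Minimal sketch: logistic regression with a prejudice-style fairness regularizer.
    # This is an illustrative approximation, not the paper's exact objective.
    import numpy as np
    from scipy.optimize import minimize

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

    def objective(w, X, y, s, eta, lam):
        """Negative log-likelihood + eta * fairness penalty + L2 weight decay."""
        p = sigmoid(X @ w)                      # model estimate of P(y=1 | x)
        eps = 1e-12
        nll = -np.mean(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))

        # Plug-in estimate of the mutual information between the prediction
        # and the sensitive feature s: sum over groups v of
        # P(s=v) * KL( P(y_hat | s=v) || P(y_hat) ).
        p_marg = np.mean(p)
        mi = 0.0
        for v in np.unique(s):
            mask = (s == v)
            p_cond = np.mean(p[mask])
            frac = np.mean(mask)
            mi += frac * (p_cond * np.log((p_cond + eps) / (p_marg + eps))
                          + (1.0 - p_cond) * np.log((1.0 - p_cond + eps) / (1.0 - p_marg + eps)))

        return nll + eta * mi + 0.5 * lam * np.sum(w ** 2)

    def fit(X, y, s, eta=1.0, lam=0.01):
        """Fit weights numerically; eta trades predictive accuracy for fairness."""
        w0 = np.zeros(X.shape[1])
        res = minimize(objective, w0, args=(X, y, s, eta, lam), method="L-BFGS-B")
        return res.x

Setting eta = 0 recovers plain L2-regularized logistic regression; increasing eta pushes the model's predicted positive rate to be similar across the groups defined by s, at some cost in predictive accuracy.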