With the spread of data mining technologies and the accumulation of social data, such technologies and data are increasingly used for determinations that seriously affect individuals' lives. For example, credit scores are frequently computed from records of past credit data together with statistical prediction techniques. Needless to say, such determinations must be nondiscriminatory and fair with respect to sensitive features such as race, gender, and religion. Several researchers have recently begun to develop analysis techniques that are aware of social fairness or discrimination. They have shown that simply excluding sensitive features is insufficient for eliminating bias in determinations, because sensitive information can exert an indirect influence through correlated features. In this paper, we first discuss three causes of unfairness in machine learning. We then propose a regularization approach that is applicable to any prediction algorithm based on a probabilistic discriminative model. We further apply this approach to logistic regression and empirically demonstrate its effectiveness and efficiency.
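The regularization idea described in the abstract can be sketched as follows: add a fairness penalty to the usual logistic-regression loss, so that the optimizer trades predictive accuracy against dependence of the prediction on the sensitive feature. This is a minimal illustrative sketch, not the paper's exact regularizer; here a simple demographic-parity term (the squared gap in mean predicted probability between the two sensitive groups) stands in for the paper's penalty, and `eta`, `train_fair_logreg`, and the synthetic data are all hypothetical choices for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_fair_logreg(X, y, s, eta=1.0, lr=0.1, n_iter=2000):
    """Logistic regression with a fairness regularizer (illustrative sketch).

    X: feature matrix (n, d); y: binary labels in {0, 1};
    s: binary sensitive attribute in {0, 1};
    eta: weight of the fairness penalty (eta=0 gives plain logistic regression).
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        p = sigmoid(X @ w)
        # Gradient of the average negative log-likelihood.
        grad_nll = X.T @ (p - y) / n
        # Demographic-parity gap: mean prediction for s=1 minus s=0.
        gap = p[s == 1].mean() - p[s == 0].mean()
        # Gradient of the gap, using d sigmoid(z)/dz = p(1-p).
        dp = p * (1.0 - p)
        grad_gap = (X[s == 1] * dp[s == 1, None]).mean(axis=0) \
                 - (X[s == 0] * dp[s == 0, None]).mean(axis=0)
        # Total gradient of: NLL + eta * gap^2.
        w -= lr * (grad_nll + eta * 2.0 * gap * grad_gap)
    return w
```

On synthetic data in which a feature is correlated with the sensitive attribute, raising `eta` shrinks the group gap in predicted probabilities relative to the unregularized model, which is exactly the indirect-influence effect the abstract describes: dropping the sensitive feature itself is not enough when correlated features remain.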