We study fairness in classification, where individuals are classified (e.g., admitted to a university) and the goal is to prevent discrimination against individuals based on their membership in some group, while maintaining utility for the classifier (here, the university). The main conceptual contribution of this paper is a framework for fair classification comprising (1) a (hypothetical) task-specific metric that quantifies how similar two individuals are with respect to the classification task at hand, and (2) an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly. We also present an adaptation of our approach that achieves the complementary goal of "fair affirmative action," which guarantees statistical parity (i.e., the demographics of the set of individuals receiving any given classification match the demographics of the underlying population) while treating similar individuals as similarly as possible. Finally, we discuss the relationship between fairness and privacy: when fairness implies privacy, and how tools developed in the context of differential privacy may be applied to fairness.
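The fairness constraint can be made concrete with a small sketch: a randomized classifier M maps each individual to a distribution over outcomes, and the constraint requires that the statistical (total variation) distance between any two individuals' outcome distributions be bounded by their distance under the task-specific metric. The helper names and toy data below are illustrative assumptions, not artifacts of the paper itself.

```python
from itertools import combinations_with_replacement

def tv_distance(p, q):
    """Total variation distance between two outcome distributions (dicts
    mapping outcome -> probability)."""
    outcomes = set(p) | set(q)
    return 0.5 * sum(abs(p.get(a, 0.0) - q.get(a, 0.0)) for a in outcomes)

def is_lipschitz_fair(classifier, metric, individuals, tol=1e-9):
    """Check the fairness constraint: for every pair x, y,
    D_TV(M(x), M(y)) <= d(x, y), so similar individuals receive
    similar distributions over outcomes."""
    for x, y in combinations_with_replacement(individuals, 2):
        if tv_distance(classifier[x], classifier[y]) > metric(x, y) + tol:
            return False
    return True

# Toy instance (hypothetical data): two similar applicants and one
# dissimilar one, with a hand-specified task metric d.
M = {
    "alice": {"admit": 0.8, "reject": 0.2},
    "bob":   {"admit": 0.7, "reject": 0.3},
    "carol": {"admit": 0.1, "reject": 0.9},
}
def d(x, y):
    if x == y:
        return 0.0
    return 0.15 if {x, y} == {"alice", "bob"} else 1.0

# TV(alice, bob) = 0.1 <= 0.15, and all other pairs are within distance 1.0.
print(is_lipschitz_fair(M, d, list(M)))  # True
```

In the paper's formulation, maximizing expected utility subject to these pairwise constraints is a linear program, since both the objective and the total-variation bounds are linear in the probabilities M(x)(a).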