References:
- An introduction to computational learning theory. Theoretical Computer Science.
- Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
- Incentive compatible regression learning. Proceedings of the Nineteenth Annual ACM-SIAM Symposium on Discrete Algorithms.
- Approximate mechanism design without money. Proceedings of the 10th ACM Conference on Electronic Commerce.
- Strategyproof classification under constant hypotheses: a tale of two functions. Proceedings of the 23rd AAAI Conference on Artificial Intelligence (AAAI'08), Volume 1.
- Proceedings of the 11th ACM Conference on Electronic Commerce.
- On the limits of dictatorial classification. Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), Volume 1.
- Tight bounds for strategyproof classification. Proceedings of the 10th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), Volume 1.
- Research proposal: cooperation among self-interested agents. Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI'11), Volume 3.
- ACM SIGecom Exchanges.
Strategyproof classification deals with a setting where a decision-maker must classify a set of input points with binary labels while minimizing the expected error. The labels of the input points are reported by self-interested agents, who may lie in order to obtain a classifier that more closely matches their own labels, thereby biasing the data; this motivates the design of truthful mechanisms that discourage false reports. Previous work [Meir et al., 2008] investigated both decision-theoretic and learning-theoretic variants of the setting, but considered only classifiers that belong to a degenerate class. In this paper we assume that the agents are interested in a shared set of input points. We show that this plausible assumption leads to powerful results. In particular, we demonstrate that variations of a truthful random dictator mechanism can guarantee approximately optimal outcomes with respect to any class of classifiers.
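The basic random dictator idea mentioned in the abstract can be sketched in a few lines: pick one agent uniformly at random and return the classifier in the class that best fits that agent's reported labels over the shared input points. Because an agent's report only influences the outcome when that agent is the dictator, and misreporting then can only worsen its own fit, truthful reporting is a dominant strategy. The sketch below is illustrative only (the paper's actual mechanisms are variations on this idea); the function and variable names are hypothetical, and labels and classifiers are represented as ±1 vectors over the shared point set.

```python
import random

def random_dictator(reports, hypothesis_class):
    """Illustrative random dictator mechanism (hypothetical sketch).

    reports: dict mapping agent id -> list of +/-1 labels over a
             shared set of input points (all lists of equal length).
    hypothesis_class: list of candidate classifiers, each given as a
             list of +/-1 predictions over the same shared points.
    Chooses one agent uniformly at random and returns the classifier
    with the fewest disagreements with that agent's labels.
    """
    dictator = random.choice(sorted(reports))
    labels = reports[dictator]
    # Empirical risk minimization restricted to the dictator's labels.
    return min(hypothesis_class,
               key=lambda h: sum(p != y for p, y in zip(h, labels)))

# Hypothetical example: three agents labeling four shared points,
# and a tiny hypothesis class.
reports = {
    "a": [+1, +1, -1, -1],
    "b": [+1, -1, -1, -1],
    "c": [+1, +1, +1, -1],
}
H = [[+1, +1, +1, +1], [-1, -1, -1, -1], [+1, +1, -1, -1]]
print(random_dictator(reports, H))
```

Note that with a single agent the mechanism degenerates to plain empirical risk minimization for that agent, which makes the truthfulness argument transparent: the chosen agent has no incentive to misreport against its own objective.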