Algorithms for strategyproof classification
Artificial Intelligence
In the strategyproof classification setting, a set of labeled examples is partitioned among multiple agents. Given the reported labels, an optimal classification mechanism returns a classifier that minimizes the number of mislabeled examples. However, each agent is interested in the accuracy of the returned classifier on its own examples, and may misreport its labels in order to obtain a better classifier, thus contaminating the dataset. The goal is to design strategyproof mechanisms that correctly label as many examples as possible. Previous work has investigated the foregoing setting under limiting assumptions, or with respect to very restricted classes of classifiers. In this paper, we study the strategyproof classification setting with respect to prominent classes of classifiers---Boolean conjunctions and linear separators---and without any assumptions on the input. On the negative side, we show that strategyproof mechanisms cannot achieve a constant approximation ratio, by showing that such mechanisms must be dictatorial on a subdomain, in the sense that the outcome is selected according to the preferences of a single agent. On the positive side, we present a randomized mechanism---Iterative Random Dictator---and demonstrate both that it is strategyproof and that its approximation ratio does not increase with the number of agents. Interestingly, the notion of dictatorship is prominently featured in all our results, helping to establish both upper and lower bounds.
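To illustrate the dictatorship idea behind the positive results, the following is a minimal sketch of a plain random-dictator mechanism in the constant-classifier special case (where the mechanism must output a single label for all examples). It is not the paper's Iterative Random Dictator; it only shows why dictatorial selection yields strategyproofness: the outcome depends solely on the chosen agent's own report, and that agent's best response is to report truthfully. The function names and the dict-of-label-lists input format are illustrative assumptions.

```python
import random
from collections import Counter

def random_dictator_constant(reported_labels, rng=None):
    """Strategyproof random-dictator sketch for constant classifiers.

    reported_labels: dict mapping agent id -> list of reported labels
    (each label +1 or -1). One agent is drawn uniformly at random, and
    the constant classifier matching the majority of *its* reported
    labels is returned. No other agent's report affects the outcome,
    and the dictator can only hurt itself by lying, so truthful
    reporting is a dominant strategy.
    """
    rng = rng or random.Random(0)  # fixed seed here only for reproducibility
    dictator = rng.choice(sorted(reported_labels))
    counts = Counter(reported_labels[dictator])
    return counts.most_common(1)[0][0]

def total_error(true_labels, constant):
    """Number of examples (across all agents) mislabeled by the constant."""
    return sum(y != constant
               for agent in true_labels
               for y in true_labels[agent])
```

A mechanism like this is strategyproof but can be far from optimal when the dictator's labels are unrepresentative; the paper's iterative variant is designed so that the approximation ratio does not grow with the number of agents.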