On the limits of dictatorial classification

  • Authors: Reshef Meir; Ariel D. Procaccia; Jeffrey S. Rosenschein
  • Affiliations: The Hebrew University of Jerusalem; Harvard SEAS; The Hebrew University of Jerusalem
  • Venue: Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2010), Volume 1
  • Year: 2010


Abstract

In the strategyproof classification setting, a set of labeled examples is partitioned among multiple agents. Given the reported labels, an optimal classification mechanism returns a classifier that minimizes the number of mislabeled examples. However, each agent is interested in the accuracy of the returned classifier on its own examples, and may misreport its labels in order to achieve a better classifier, thus contaminating the dataset. The goal is to design strategyproof mechanisms that correctly label as many examples as possible. Previous work has investigated the foregoing setting under limiting assumptions, or with respect to very restricted classes of classifiers. In this paper, we study the strategyproof classification setting with respect to prominent classes of classifiers---boolean conjunctions and linear separators---and without any assumptions on the input. On the negative side, we show that strategyproof mechanisms cannot achieve a constant approximation ratio, by showing that such mechanisms must be dictatorial on a subdomain, in the sense that the outcome is selected according to the preferences of a single agent. On the positive side, we present a randomized mechanism---Iterative Random Dictator---and demonstrate both that it is strategyproof and that its approximation ratio does not increase with the number of agents. Interestingly, the notion of dictatorship is prominently featured in all our results, helping to establish both upper and lower bounds.
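To make the setting concrete, here is a minimal sketch of a (single-round) random-dictator mechanism, not the paper's Iterative Random Dictator: each agent reports labeled examples, one agent is drawn uniformly at random, and the mechanism returns the classifier that is optimal for that agent alone. The hypothesis class here is hypothetical 1-D threshold classifiers, a stand-in for the paper's conjunctions and linear separators; all function names and data shapes are illustrative assumptions.

```python
import random

def threshold_classifier(t):
    """A hypothetical stand-in hypothesis class: label +1 iff x >= t."""
    return lambda x: 1 if x >= t else -1

def errors(h, examples):
    """Number of examples in [(x, y), ...] that classifier h mislabels."""
    return sum(1 for x, y in examples if h(x) != y)

def random_dictator(reported, thresholds, rng=random):
    """Pick one agent uniformly at random and return the hypothesis that
    minimizes *that agent's* reported errors.  Intuition for strategyproofness:
    an agent's report only affects the outcome when it is the dictator, and
    in that case reporting truthful labels yields its best classifier."""
    dictator = rng.choice(reported)
    hypotheses = [threshold_classifier(t) for t in thresholds]
    return min(hypotheses, key=lambda h: errors(h, dictator))
```

When every agent reports the same examples, the returned classifier is optimal for all of them regardless of which dictator is drawn; the approximation loss only appears when agents' labels disagree.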