Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
Good learners for evil teachers
ICML '09 Proceedings of the 26th Annual International Conference on Machine Learning
Approximate mechanism design without money
Proceedings of the 10th ACM Conference on Electronic Commerce
Strategyproof classification under constant hypotheses: a tale of two functions
AAAI'08 Proceedings of the 23rd National Conference on Artificial Intelligence - Volume 1
Strategyproof classification with shared inputs
IJCAI'09 Proceedings of the 21st International Joint Conference on Artificial Intelligence
Competitive Repeated Allocation without Payments
WINE '09 Proceedings of the 5th International Workshop on Internet and Network Economics
Proceedings of the 11th ACM Conference on Electronic Commerce
Asymptotically optimal strategy-proof mechanisms for two-facility games
Proceedings of the 11th ACM Conference on Electronic Commerce
Truthful assignment without money
Proceedings of the 11th ACM Conference on Electronic Commerce
On the limits of dictatorial classification
Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems - Volume 1
Strategy-proof allocation of multiple items between two agents without payments or priors
Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems - Volume 1
Incentive compatible regression learning
Journal of Computer and System Sciences
Algorithms for strategyproof classification
Artificial Intelligence
Mechanism design on discrete lines and cycles
Proceedings of the 13th ACM Conference on Electronic Commerce
Research proposal: cooperation among self-interested agents
IJCAI'11 Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence - Volume Three
ACM SIGecom Exchanges
Strategyproof (SP) classification considers situations in which a decision-maker must classify a set of input points with binary labels, minimizing the expected error. The labels of the input points are reported by self-interested agents, who may lie in order to obtain a classifier that more closely matches their own labels. Such lies would bias the data, which motivates the design of truthful mechanisms that discourage false reporting. We answer questions left open by previous research on strategyproof classification [12, 13, 14], in particular regarding the best approximation ratio (in terms of social welfare) that an SP mechanism can guarantee for n agents. Our primary result is a lower bound of 3 - 2/n on the approximation ratio of SP mechanisms under the shared-inputs assumption; this shows that the previously known upper bound (for uniform weights) is tight. The proof relies on a result from social choice theory showing that any SP mechanism must select a dictator at random, according to some fixed distribution. We then show how different randomizations can improve the best known mechanism when agents are weighted, matching the lower bound with a tight upper bound. These results contribute both to a better understanding of the limits of SP classification and to the development of similar tools in other, related domains such as SP facility location.
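To make the random-dictator idea from the abstract concrete, here is a minimal, illustrative sketch: n agents share the same input points, the hypothesis class is the two constant classifiers (all-0 and all-1), and the mechanism picks one agent uniformly at random and outputs that agent's best constant classifier. The setup, function names, and tie-breaking rule are assumptions for illustration only, not the paper's actual construction or its tight weighted randomization.

```python
import random

def dictator_classifier(labels):
    """Best constant classifier for one agent's reported labels:
    the majority label (ties broken toward 1, an arbitrary choice)."""
    return 1 if 2 * sum(labels) >= len(labels) else 0

def random_dictator(all_labels, rng=random):
    """Pick one agent uniformly at random and output that agent's best
    constant classifier. This is strategyproof: an agent's report only
    matters when she is the dictator, and then misreporting can only
    worsen her own outcome."""
    agent = rng.randrange(len(all_labels))
    return dictator_classifier(all_labels[agent])

def social_error(constant, all_labels):
    """Total number of points, over all agents, that the constant
    classifier mislabels (uniform agent weights)."""
    return sum(label != constant
               for labels in all_labels
               for label in labels)

def expected_dictator_error(all_labels):
    """Expected social error of the uniform random-dictator mechanism,
    averaged over the n equally likely dictator choices."""
    n = len(all_labels)
    return sum(social_error(dictator_classifier(labels), all_labels)
               for labels in all_labels) / n

# Toy instance: three agents, one shared point, labels 0, 0, 1.
# The optimal constant (0) has social error 1; the random dictator
# outputs 0 with probability 2/3 and 1 with probability 1/3, giving
# expected error 4/3 -- a ratio of 4/3, within the 3 - 2/n bound (7/3).
```

The expectation is computed exactly over the dictator distribution rather than by sampling, which keeps the approximation-ratio comparison deterministic.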