Redundant noisy attributes, attribute errors, and linear-threshold learning using Winnow
COLT '91 Proceedings of the fourth annual workshop on Computational learning theory
Learning in the presence of malicious errors
SIAM Journal on Computing
Nash Q-learning for general-sum stochastic games
The Journal of Machine Learning Research
Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining
Mechanism Design via Machine Learning
FOCS '05 Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science
Distributed Interpretation: A Model and Experiment
IEEE Transactions on Computers
Incentive compatible regression learning
Proceedings of the nineteenth annual ACM-SIAM symposium on Discrete algorithms
AAAI'07 Proceedings of the 22nd national conference on Artificial intelligence - Volume 1
Towards a theory of incentives in machine learning
ACM SIGecom Exchanges
Approximate mechanism design without money
Proceedings of the 10th ACM conference on Electronic commerce
Strategyproof classification with shared inputs
IJCAI'09 Proceedings of the 21st international joint conference on Artificial intelligence
Proceedings of the 11th ACM conference on Electronic commerce
On the limits of dictatorial classification
Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems - Volume 1
Tight bounds for strategyproof classification
The 10th International Conference on Autonomous Agents and Multiagent Systems - Volume 1
On strategy-proof allocation without payments or priors
WINE'11 Proceedings of the 7th international conference on Internet and Network Economics
Algorithms for strategyproof classification
Artificial Intelligence
ACM SIGecom Exchanges
We consider the following setting: a decision maker must make a decision based on reported data points with binary labels. Subsets of the data points are controlled by different selfish agents, who may misreport their labels in order to sway the decision in their favor. We design mechanisms (both deterministic and randomized) that reach an approximately optimal decision and are strategyproof, i.e., each agent is best off reporting truthfully. We then recast our results in a classical machine learning classification framework, where the decision maker must choose between the constant positive hypothesis and the constant negative hypothesis based only on a sampled subset of the agents' points.
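To make the flavor of such mechanisms concrete, here is a minimal sketch of one natural deterministic mechanism for this setting: collapse each agent to the majority label over its own points, then take a weighted majority vote across agents, weighting each agent by the number of points it controls. Since an agent can influence the outcome only through its own majority label, misreporting can never help it; the function name and input format below are illustrative, not taken from the paper.

```python
def strategyproof_constant_classifier(reports):
    """Choose a constant label (+1 or -1) from agents' reported labels.

    `reports` maps each agent to the list of binary labels (+1/-1) it
    reports for the points it controls (illustrative format).  Each
    agent is first collapsed to the majority label over its own points,
    then a weighted majority vote (weight = number of the agent's
    points) selects the constant hypothesis.  An agent can affect the
    outcome only through its single majority vote, so truthful
    reporting is a dominant strategy.
    """
    tally = 0
    for labels in reports.values():
        majority = 1 if sum(labels) >= 0 else -1  # agent's own majority label
        tally += majority * len(labels)           # vote weighted by point count
    return 1 if tally >= 0 else -1

# Example: agent "a" mostly prefers +1 (3 points), agent "b" is
# unanimously -1 (4 points), so the weighted vote is 3 - 4 < 0.
print(strategyproof_constant_classifier({"a": [1, 1, -1],
                                         "b": [-1, -1, -1, -1]}))  # -1
```

Note that this decision rule optimizes over agents' aggregated votes rather than over the raw points, which is exactly what buys strategyproofness at the cost of some approximation loss relative to the optimal constant hypothesis.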